24.3 Mixed integer programming with bounded variables

Many decisions have both a continuous and a discrete nature. For example, in the production of electric power the discrete decision is whether or not to switch on a piece of equipment. The equipment can produce electric energy in a relatively wide range. Thus, if the first decision is to switch it on, then a second, continuous decision must be made on the level of the produced energy. The proper mathematical model of such problems must contain both discrete and continuous variables.

This section is devoted to the mixed integer linear programming problem with bounded integer variables. It is assumed that there are n variables and that a subset of them must be integer. The model has linear constraints in equation form and each integer variable has an explicit integer upper bound. It is also supposed that all variables must be nonnegative. More formally, the mathematical problem is as follows.

where c and x are n-dimensional vectors, A is an m x n matrix, b is an m-dimensional vector and, finally, each integer upper bound in (24.33) is a positive integer.

In the mathematical analysis of the problem below, the explicit upper bound constraints (24.33) will not be used. The Reader may think of them as formally included in the other algebraic constraints (24.32).

There are technical reasons why the algebraic constraints in (24.32) are required to be in equation form. Linear programming relaxation is used in the method, and the linear programming problem is solved by the simplex method, which needs this form. Generally speaking, however, equations and inequalities can be transformed into one another in an equivalent way. Even in the numerical example discussed below, the inequality form is used.
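For concreteness, the data of Problem (24.31)-(24.35) in the equation form just described can be collected in a small container like the following Python sketch; the field names are illustrative assumptions, not notation taken from the text.

from dataclasses import dataclass
import numpy as np

@dataclass
class BoundedMIP:
    """max c^T x  subject to  A x = b,  x >= 0,
    with x_j integer and 0 <= x_j <= upper_bound[j] for the indices in integer_idx."""
    A: np.ndarray            # m x n matrix of the algebraic constraints (24.32)
    b: np.ndarray            # m-dimensional right-hand side
    c: np.ndarray            # n-dimensional objective coefficients
    integer_idx: frozenset   # indices of the variables required to be integer
    upper_bound: dict        # explicit positive integer upper bound (24.33) for each of them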

First a numerical example is analyzed. The course of the method is discussed from a geometric point of view; thus some technical details remain concealed. Next the simplex method and related topics are discussed, since all technical details can be described only with this background. Finally some strategic points of the algorithm are analyzed.

24.3.1 The geometric analysis of a numerical example

The problem to be solved is

\max\ 2x_1 + x_2
3x_1 + 5x_2 \le 15
3x_1 - 5x_2 \le 0                                  (24.36)
x_1, x_2 \ge 0, \quad x_1, x_2 \mbox{ integer}.

To obtain a relaxation the integrality constraints are omitted from the problem. Thus a linear programming problem of two variables is obtained.

Figure 24.2.  The geometry of the linear programming relaxation of Problem (24.36) including the feasible region (triangle \overline{OAB} ), the optimal solution ( x_{1}=2.5,\, x_{2}=1.5 ), and the optimal level of the objective function represented by the line 2x_{1}+x_{2}=\frac{13}{2} .

Figure 24.3.  The geometry of the course of the solution. The co-ordinates of the points are: O=(0,0), A=(0,3), B=(2.5,1.5), C=(2,1.8), D=(2,1.2), E=(\frac{5}{3},2), F=(0,2), G=(\frac{5}{3},1), H=(0,1), I=(1,2.4), and J=(1,2). The feasible regions of the relaxation are as follows. Branch 1: \overline{OAB} , Branch 2: \overline{OACD} , Branch 3: empty set, Branch 4: \overline{OHG} , Branch 5: \overline{AEF} , Branch 6: \overline{AIJF} , Branch 7: empty set (not on the figure). Point J is the optimal solution.

Figure 24.4.  The course of the solution of Problem (24.36). The upper numbers in the circles are explained in Subsection 24.3.2; they are the corrections of the previous bounds obtained from the first pivoting step of the simplex method. The lower numbers are the (continuous) upper bounds obtained in the branch.


The branching is made according to a non-integer variable. Both x_1 and x_2 have fractional values. To keep the number of branches as low as possible, only two new branches are created in each step.

The numbering of the branches is as follows. The original set of feasible solutions is No. 1. When two new branches are generated, the branch belonging to the smaller values of the branching variable gets the smaller number. The numbers are consecutive positive integers starting at 1. Branches having no feasible solution are numbered, too.

The optimal solution of the relaxation is x_1 = 2.5, x_2 = 1.5, and the optimal value is 13/2, as can be seen from Figure 24.2. The optimal solution is the intersection point of the lines determined by the equations

3x_1 + 5x_2 = 15

and

3x_1 - 5x_2 = 0.

If the branching is based on variable x_1 then the two branches are defined by the inequalities

x_1 \le 2 \quad \mbox{and} \quad x_1 \ge 3.

Notice that the maximal value of x_1 is 2.5, hence the branch belonging to x_1 \ge 3 is empty. In the next subsection the problem is revisited and this fact will be observed from the simplex tableaux. Variable x_2 would create the branches

x_2 \le 1 \quad \mbox{and} \quad x_2 \ge 2.

None of them is empty. Thus it is more advantageous to branch according to x_1. Geometrically it means that the set of feasible solutions of the relaxed problem is cut by the line x_1 = 2; thus the new set becomes the quadrangle \overline{OACD} on Figure 24.3. The optimal solution on that set is x_1 = 2, x_2 = 1.8, i.e. point C on the figure.

Now branching is possible only according to variable x_2. Branches 4 and 5 are generated by the cuts x_2 \le 1 and x_2 \ge 2, respectively. The feasible regions of the relaxed problems are \overline{OHG} for Branch 4 and \overline{AEF} for Branch 5. The method continues with the investigation of Branch 5. The reason will be given in the next subsection, when the quickly calculable upper bounds are discussed. On the other hand, it is obvious that the set \overline{AEF} is more promising than \overline{OHG} if the Reader takes into account the position of the contour, i.e. the level line, of the objective function on Figure 24.3. The algebraic details discussed in the next subsection serve to realize in higher dimensions the decisions that can be seen directly in two dimensions.

Branches 6 and 7 are defined by the inequalities x_1 \le 1 and x_1 \ge 2, respectively. The latter one is empty again. The feasible region of Branch 6 is \overline{AIJF}. The optimal solution in this quadrangle is attained at point I. Notice that there are only three integer points in \overline{AIJF}, namely (0,3), (0,2), and (1,2). Thus the optimal integer solution of this branch is (1,2). There is a technique which can help to leap over the continuous optimum. In this case it reaches point J directly, i.e. the optimal integer solution of the branch, as will be seen later in this section, too. For now assume that the integer optimal solution with objective function value 4 is discovered.

At this stage of the algorithm the only unfathomed branch is Branch 4 with feasible region \overline{OHG}. Obviously the optimal solution is point G=(5/3,1). Its objective function value is 13/3. As the objective function takes integer values on the integer points, Branch 4 cannot contain a feasible solution better than the known (1,2) with value 4. Hence the algorithm is finished.
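The whole course just described can be reproduced with a few lines of Python. The sketch below is only an illustration: it uses scipy.optimize.linprog for the relaxations, and the constraint data follow the reconstruction of (24.36) from Figures 24.2 and 24.3 rather than a verbatim copy of the text.

# Minimal branch-and-bound sketch for the example; problem data reconstructed
# from Figures 24.2-24.3 (an assumption, not a quotation of (24.36)).
from math import floor, ceil, isclose
from scipy.optimize import linprog

c = [-2.0, -1.0]                       # linprog minimizes, so the objective is negated
A_ub = [[3.0, 5.0], [3.0, -5.0]]
b_ub = [15.0, 0.0]

best_value, best_x = float("-inf"), None
stack = [((0, None), (0, None))]       # variable bounds (low, high); None = no upper bound

while stack:
    bounds = stack.pop()               # LIFO selection of the next branch
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success:                # empty branch: fathomed
        continue
    value = -res.fun                   # upper bound of the branch
    if value <= best_value:            # cannot contain a better solution: fathomed
        continue
    frac = [i for i, v in enumerate(res.x) if not isclose(v, round(v), abs_tol=1e-7)]
    if not frac:                       # integer optimal solution of the branch
        best_value, best_x = value, res.x
        continue
    i = frac[0]                        # branch on the first fractional variable
    lo, hi = bounds[i]
    lower, upper = list(bounds), list(bounds)
    lower[i] = (lo, floor(res.x[i]))   # lower branch: x_i <= floor of its value
    upper[i] = (ceil(res.x[i]), hi)    # upper branch: x_i >= ceiling of its value
    stack.extend([tuple(lower), tuple(upper)])

print(best_x, best_value)              # expected: x_1 = 1, x_2 = 2 with value 4

The depth-first (LIFO) selection realized by the stack is one of the strategies discussed in Subsection 24.3.4.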

24.3.2 The linear programming background of the method

The first general method for solving linear programming problems was discovered by George Dantzig and is called the simplex method. There are plenty of versions of the simplex method. The main tool of the algorithm discussed here is the so-called dual simplex method. Although the simplex method is discussed in a previous volume, the basic knowledge is summarized here.

Any kind of simplex method is a so-called pivoting algorithm. An important property of pivoting algorithms is that they generate equivalent forms of the equation system and – in the case of linear programming – of the objective function. Practically it means that the algorithm works with equations. As many variables as there are linearly independent equations are expressed by the other variables, and further consequences are drawn from the current equivalent form of the equations.

If there are inequalities in the problem then they are reformulated by introducing nonnegative slack variables. E.g. in the case of the LP relaxation of Problem (24.36), with slack variables x_3 and x_4, the equivalent form of the problem is

\max\ 2x_1 + x_2
3x_1 + 5x_2 + x_3 = 15
3x_1 - 5x_2 + x_4 = 0                              (24.37)
x_1, x_2, x_3, x_4 \ge 0.

Notice that all variables appear in all equations including the objective function, but it is allowed that some coefficients are zeros. The current version (24.37) can be considered as a form where the variables x_3 and x_4 are expressed by x_1 and x_2 and the expression is substituted into the objective function. If x_1 = x_2 = 0 then x_3 = 15 and x_4 = 0, thus the solution is feasible. Notice that the value of the objective function is 0, and if it is possible to increase the value of any of x_1 and x_2 and still obtain a feasible solution, then a better feasible solution is obtained. It is true, because the method uses equivalent forms of the objective function. The method obtains a better feasible solution by pivoting. Let x_1 and x_2 be the two expressed variables. Skipping some pivot steps, the equivalent form of (24.37) is

2x_1 + x_2 = \frac{13}{2} - \frac{13}{30}x_3 - \frac{7}{30}x_4
x_1 = \frac{5}{2} - \frac{1}{6}x_3 - \frac{1}{6}x_4              (24.38)
x_2 = \frac{3}{2} - \frac{1}{10}x_3 + \frac{1}{10}x_4.

That form has two important properties. First, if x_3 = x_4 = 0 then x_1 = 5/2 and x_2 = 3/2, thus the solution is feasible, similarly to the previous case. Generally this property is called primal feasibility. Secondly, the coefficients of the non-expressed variables are negative in the objective function. This is called dual feasibility. It implies that if any of the non-expressed variables is positive in a feasible solution then that solution is worse than the current one. It is true again, because the current form of the objective function is equivalent to the original one. Thus the current value of the objective function, which is 13/2, is optimal.

In what follows the sign of maximization and the nonnegativity of the variables will be omitted from the problem but they are kept in mind.

In the general case it may be assumed without loss of generality that all equations are independent. Simplex method uses the form of the problem

where A is an m x n matrix, c and x are n-dimensional vectors, and b is an m-dimensional vector. According to the assumption that all equations are independent, A has m linearly independent columns. They form a basis of the m-dimensional linear space. They also form an invertible submatrix, which is denoted by B. The inverse of B is B^{-1}. Matrix A is partitioned into the basic and non-basic parts, A = (B, N), and the vectors c and x are partitioned accordingly. Hence

The expression of the basic variables is identical with the multiplication of the equation by B^{-1} from the left

where is the unit matrix. Sometimes the equation is used in the form

The objective function can be transformed into the equivalent form

Notice that the coefficients of the basic variables are zero. If the non-basic variables are zero valued then the value of the basic variables is given by the equation

Hence the objective function value of the basic solution is

Definition 24.7 A vector is a solution of Problem (24.39)-(24.41) if it satisfies the equation (24.40). It is a feasible solution or, equivalently, a primal feasible solution if it satisfies both (24.40) and (24.41). A solution is a basic solution if the columns of matrix A belonging to the non-zero components of the solution are linearly independent. A basic solution is a basic feasible or, equivalently, a basic primal feasible solution if it is feasible. Finally, a basic solution is a basic dual feasible solution if

The primal feasibility of a basic feasible solution is equivalent to

Let be the column vectors of matrix . Further on let and be the set of indices of the basic and non-basic variables, respectively. Then componentwise reformulation of (24.44) is

The meaning of dual feasibility is the following. The current value of the objective function given in (24.43) is an upper bound of the optimal value, as all coefficients in the equivalent form of the objective function are nonpositive. Thus if any feasible, i.e. nonnegative, solution is substituted into it, then its value can be at most the constant term, i.e. the current value.

Definition 24.8 A basic solution is OPTIMAL if it is both primal and dual feasible.

It is known from the theory of linear programming that among the optimal solutions there is always at least one basic solution. To prove this statement is beyond the scope of the chapter.

In Problem (24.37)

If the basic variables are x_1 and x_2 then

Hence

The last equation is true as B^{-1}B is the unit matrix. Finally

One can conclude that this basic solution is both primal and dual feasible.
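Using the data of (24.37) as reconstructed above (constraints 3x_1 + 5x_2 + x_3 = 15 and 3x_1 - 5x_2 + x_4 = 0 with objective 2x_1 + x_2), the feasibility of the basis formed by x_1 and x_2 can also be checked numerically. The sketch below follows the B^{-1}b and c_N - c_B B^{-1}N computations described above; the concrete numbers are part of the reconstruction, not a quotation.

import numpy as np

A = np.array([[3.0, 5.0, 1.0, 0.0],     # 3*x1 + 5*x2 + x3 = 15
              [3.0, -5.0, 0.0, 1.0]])   # 3*x1 - 5*x2 + x4 = 0   (reconstructed data)
b = np.array([15.0, 0.0])
c = np.array([2.0, 1.0, 0.0, 0.0])

basic, nonbasic = [0, 1], [2, 3]        # the basis consists of x1 and x2
B, N = A[:, basic], A[:, nonbasic]

x_B = np.linalg.solve(B, b)             # B^{-1} b, the values of the basic variables
reduced = c[nonbasic] - c[basic] @ np.linalg.solve(B, N)   # c_N - c_B B^{-1} N

print(x_B)             # [2.5 1.5]              -> nonnegative: primal feasible
print(reduced)         # approx [-0.433 -0.233] -> nonpositive: dual feasible
print(c[basic] @ x_B)  # 6.5 = 13/2, the objective value of the basic solution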

There are two types of simplex methods. The primal simplex method starts from a primal feasible basic solution. Executing pivoting steps, it goes through primal feasible basic solutions until finally dual feasibility is achieved as well. The objective function values are monotone increasing in the primal simplex method.

The dual simplex method starts from a dual feasible basic solution and goes through dual feasible basic solutions until primal feasibility is also achieved in the last iteration. The objective function values are monotone decreasing in the dual simplex method. It is discussed in detail here as it is the main algorithmic tool of the method.

Each simplex method uses its own simplex tableau. Each tableau contains the transformed equivalent forms of the equations and the objective function. In the case of the dual simplex tableau its elements are derived from the form of the equations

where the second equation indicates that the minus sign is associated to the non-basic variables. The dual simplex tableau contains the expression of all variables by the negative non-basic variables, including the objective function variable and the non-basic variables themselves. For the latter ones the trivial equation

x_j = -(-x_j)

is included. For example, the dual simplex tableau for (24.37), provided that the basic variables are x_1 and x_2 (see (24.38)), is

Generally speaking, the potentially non-zero coefficients of the objective function are in the first row, the constant terms are in the first column and all other coefficients are in the interior of the tableau. The order of the rows is never changed. On the other hand, a variable which left the basis immediately gets a column in place of the variable which entered the basis.

The elements of the dual simplex tableau are denoted by d_{ij}, where j = 0 refers to the constant term of the equation of variable x_i, and otherwise d_{ij} is the coefficient of the non-basic variable -x_j in the expression of the variable x_i. As the 0th row belongs to the objective function variable, d_{0j} is the coefficient of -x_j in the equivalent form (24.42) of the objective function. The dual simplex tableau can be seen in Figure 24.5.

Figure 24.5.  The elements of the dual simplex tableau.


Notice that dual feasibility means that the elements in the first row of the tableau are nonnegative, with the potential exception of its first element, i.e. of the objective function value.

Without giving the proof of its correctness, the pivoting procedure is the following. The aim of the pivoting is to eliminate the primal infeasibility, i.e. the negative values of the variables, with the potential exception of the objective function value, i.e. to eliminate the negative terms from the first column. Instead of that basic variable, a non-basic one will be expressed from the equation such that the negative constant term disappears and the value of the new basic variable becomes positive. It is easy to see that this requirement can be satisfied only if the newly expressed variable has a negative coefficient in the equation. After the change of the basis the row of the leaving variable must become a negative unit vector, as it became a non-basic variable, thus its expression is

The transformation of the tableau consists of the transformations of the columns such that the form (24.45) of the row of the leaving variable is generated. The position of the (-1) in that row is the crossing of the row of the leaving variable and the column belonging to the entering variable before pivoting. This column becomes the column of the leaving variable. There is another requirement, namely that dual feasibility must be maintained. Consider the columns of the non-basic variables, including the column of the constant terms. Then the formulae of the column transformation are the following, where the column index is either zero or the index of a non-basic variable different from the entering one:

and

To maintain dual feasibility means that after the change of the basis the relation must hold for all non-basic indices, i.e. for all . It follows from (24.46) that must be chosen such that

In the course of the branch and bound method the dual simplex method can save a lot of computation in the optimization of the relaxed subproblems. On the other hand, what is used in the description of the method is only the effect of one pivoting step on the value of the objective function. According to (24.46) the new value is

Notice that and . Hence the objective function value decreases by the nonnegative value

Formula (24.48) will be used if a new constraint is introduced at branching and it cuts off the previous optimal solution. As the new constraint has nothing to do with the objective function, it does not destroy dual feasibility; but, of course, the previous optimal solution becomes primal infeasible in the relaxed problem of the new branch.
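The pivoting step just described can be sketched in code. The routine below assumes the tableau layout of Figure 24.5 stored as a list of rows (row 0 is the objective row, column 0 holds the constant terms, the other columns belong to the negated non-basic variables); writing d_{0q}, d_{p0} and d_{pq} for the objective-row entry, the constant term and the pivot element, it applies the ratio rule (24.47) and returns the decrease (24.48). It is a sketch consistent with the description above, not a verbatim transcription of the formulas.

from fractions import Fraction

def dual_simplex_pivot(T, p):
    """One dual simplex pivot in row p of the tableau T (lists of Fractions).
    Requires T[p][0] < 0.  Returns (pivot column, decrease of the objective value),
    or (None, None) if the row has no negative entry, i.e. the problem is infeasible."""
    cols = range(1, len(T[p]))
    candidates = [j for j in cols if T[p][j] < 0]
    if not candidates:
        return None, None
    # ratio rule (24.47): keep dual feasibility by choosing the smallest d_{0j}/|d_{pj}|
    q = min(candidates, key=lambda j: T[0][j] / -T[p][j])
    decrease = T[0][q] * T[p][0] / T[p][q]      # (24.48): a nonnegative quantity
    for i in range(len(T)):                     # column transformation of every other row
        if i == p:
            continue
        factor = T[i][q] / T[p][q]
        for j in range(len(T[i])):
            T[i][j] -= factor * T[p][j]
        T[i][q] = -factor                       # this column now belongs to the leaving variable
    T[p] = [Fraction(0)] * len(T[p])            # the leaving variable's row becomes
    T[p][q] = Fraction(-1)                      # a negative unit vector, as in (24.45)
    return q, decrease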

For example, if the inequality x_1 \le 2 is added to the relaxation (24.37), defining a new branch, then it is used in the equation form

where the new slack variable is a nonnegative continuous variable. According to the simplex tableau

Hence

(24.49) is added to the problem in the form (24.50). Then the dual simplex tableau is

Only the new slack variable has a negative value, thus the first pivoting must be done in its row. Rule (24.47) selects x_4 for entering the basis. Then after the first pivot step the value of the objective function decreases by

If the optimal solution of the relaxed problem is not reached after the first pivoting then further decrease is possible. But a decrease of 0.7 is certain compared to the previous upper bound.
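Using the reconstructed numbers of the relaxation (objective row 13/2, 13/30, 7/30 and cut row -1/2, -1/6, -1/6 in the layout of Figure 24.5, the columns belonging to -x_3 and -x_4), the selection made by rule (24.47) and the decrease (24.48) can be checked directly; the concrete fractions are an assumption based on the reconstruction of (24.37)-(24.38).

from fractions import Fraction as F

obj = [F(13, 2), F(13, 30), F(7, 30)]   # objective row: 13/2 + (13/30)(-x3) + (7/30)(-x4)
cut = [F(-1, 2), F(-1, 6), F(-1, 6)]    # row of the cut's slack: -1/2 - (1/6)(-x3) - (1/6)(-x4)

# rule (24.47): among the negative entries of the cut row take the smallest d_{0j}/|d_{pj}|
q = min((j for j in (1, 2) if cut[j] < 0), key=lambda j: obj[j] / -cut[j])
decrease = obj[q] * cut[0] / cut[q]     # rule (24.48)

print(q)                  # 2: the column of x4 enters the basis
print(decrease)           # 7/10
print(obj[0] - decrease)  # 29/5 = 5.8, the new upper bound of the branch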

Another important property of the cuts is that if a cut has no negative coefficient in the form in which it is added to the simplex tableau, then there is no negative pivot element, i.e. the relaxed problem is infeasible and the branch is empty. For example, the cut x_1 \ge 3 leading to an empty branch is added to the problem in the form

where the slack is also a nonnegative variable. Substituting the expression of x_1 again, the equation is transformed to

Hence the simplex tableau

is obtained. There is a negative value at the crossing point of the first column and the row of the slack variable. But it is not possible to choose a pivot element in that row, as there is no negative element in the row. It means that feasibility cannot be achieved, i.e. that branch is infeasible and thus it is completely fathomed.

24.3.3 Fast bounds on lower and upper branches

The branching is always based on a variable which should be integer but has a fractional value in the current optimal solution of the linear programming relaxation. If it has a fractional value then its value is non-zero, thus it is a basic variable. Assume that its index is . Remember that , , and are the index sets of the integer, basic, and non-basic variables, respectively. Hence . According to the last simplex tableau, the variable is expressed by the non-basic variables as follows:

As has fractional value

The branch created by the inequality

is called lower branch and the inequality

creates the upper branch.

Let and be the set of indices of the nonbasic variables according to the signs of the coefficients in (24.51), i.e.

First the lower branch is analyzed. It follows from (24.51) that the inequality (24.52) is equivalent to

Thus

is a nonnegative variable and row (24.53) can be added to the dual simplex tableau. It will contain the only negative element in the first column, that is, the optimization in the lower branch starts by pivoting in this row. (24.53) can be reformulated according to the signs of the coefficients as

The pivot element must be negative, thus it is one of 's with . Hence the first decrease (24.48) of the objective function is

In the upper branch the inequality (24.52) is equivalent to

Again the nonnegative slack variable should be introduced. Then the row which can be added to the simplex tableau is

Thus the pivot element must belong to one of the indices giving the value

Let the upper bound on the original branch obtained by linear programming be given. Then the quantities obtained in (24.55) and (24.57), subtracted from this bound, give upper bounds of the objective function on the lower and upper subbranches, respectively. They do not substitute for complete optimization in the subbranches. On the other hand, they are easily computable and can give some orientation for the selection of the next branch for further investigation (see below).
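A compact sketch of the two fast decreases, consistent with the derivation above and with the tableau layout of Figure 24.5 (row 0 is the objective row, column 0 the constant terms); the helper name fast_bounds and the convention that None marks an empty subbranch are assumptions of this sketch.

from fractions import Fraction
from math import floor

def fast_bounds(T, k):
    """Fast decreases for branching on the basic variable whose tableau row is k.
    Returns (P, Q): P bounds the decrease on the lower branch, Q on the upper branch;
    None means that there is no admissible pivot element, i.e. the subbranch is empty."""
    f = T[k][0] - floor(T[k][0])                 # fractional part of the variable's value
    cols = range(1, len(T[k]))
    lower = [f * T[0][j] / T[k][j] for j in cols if T[k][j] > 0]
    upper = [(1 - f) * T[0][j] / -T[k][j] for j in cols if T[k][j] < 0]
    return (min(lower) if lower else None,
            min(upper) if upper else None)

On the reconstructed optimal tableau of the example (row of x_1: 5/2, 1/6, 1/6 under the objective row 13/2, 13/30, 7/30) this returns 7/10 for the lower branch and None for the upper branch, matching the decrease of 0.7 and the empty branch x_1 \ge 3 seen earlier.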

The quantities and can be improved, i.e. increased. The claim that the variable defined in (24.54) is nonnegative implies that

In a similar way the nonnegativity of variable in (24.56) implies that

If (24.59) is multiplied by the positive number

then it gets the form

The inequalities (24.58) and (24.60) can be unified in the form:

Notice that (24.61) is not simply the sum of the two inequalities. The same negative number stands on the left-hand side of both inequalities and is greater than or equal to each right-hand side. Moreover, both right-hand sides are nonpositive. Hence the left-hand side is also greater than or equal to their sum.

The same technique is applied to the variable instead of with

where 's are integer values to be determined later. can be expressed by the non-basic variables as

Obviously is an integer variable as well and its current value if the non-basic variables are fixed to zero is equal to the current value of . Thus it is possible to define the new branches according to the values of . Then the inequality of type (24.61) which is defined by , has the form

The appropriate quantities and are as follows:

where

and

further

where

and

The values of the integers must be selected in such a way that the absolute values of the coefficients are as small as possible, because in this way the inequality cuts off the greatest possible part of the polyhedral set of the continuous relaxation. (See Exercise 24.3-1.) To do so, the absolute value of each new coefficient must be small. Depending on its sign it can be either , or , where is the fractional part of the coefficient, i.e. .

Assume that . If then the term

is present among the terms of the minimum of the lower branch. If then it obviously is at least as great as the term

which appears in , i.e. in the right-hand side of (24.55). If then there is a term

in the right-hand side of (24.57). There is a common multiplier in the terms (24.62) and (24.63), therefore it can be disregarded when the terms are compared. Under the assumption that it will be shown that

As is supposed to be negative the statement is equivalent to

Hence the inequality

must hold. It can be reduced to

It is true as and

If then according to (24.57) and (24.61) the term

is present among the terms of the minimum of the upper branch. In a similar way it can be shown that if then it is always at least as great as the term

which is present in the original formula (24.57).

Thus the rule of the choice of the integers 's is

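Consistently with the argument above, one way to sketch the choice of the multipliers is to keep, for every integer non-basic variable, the residual coefficient that yields the larger guaranteed decrease: the fractional part of the coefficient when it is below the fractional part f of the branching variable, and that fractional part minus one otherwise. The function below is an illustrative reading of rule (24.64); the tie-breaking when the two fractional parts are equal does not change the resulting bound.

from math import floor, ceil

def improved_row(row, integer_cols, f):
    """Return the branching row with the coefficients of the integer non-basic
    variables replaced by their residuals; `row` is the tableau row of the
    branching variable (constant term first), f its fractional part."""
    new = list(row)
    for j in integer_cols:
        f_j = row[j] - floor(row[j])            # fractional part of the coefficient
        mu = floor(row[j]) if f_j < f else ceil(row[j])
        new[j] = row[j] - mu                    # residual coefficient: f_j or f_j - 1
    return new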
24.3.4 Branching strategies

The B&B frame does not restrict the selection of the unfathomed node for the next branching in row 7 of the Branch-and-Bound algorithm. First, two extreme strategies are discussed together with their pros and cons. The same considerations have to be taken into account in almost all applications of B&B. The third method is a compromise between the two extreme ones. Finally, methods more specific to integer programming are discussed.

24.3.4.1.  The LIFO Rule

LIFO means “Last-In-First-Out”, i.e. one of the branches generated in the last iteration is selected. A general advantage of the rule is that the size of the enumeration tree and the size of the list remain as small as possible. In the case of the integer programming problem the creation of the branches is based on the integer values of the variables. Thus the number of the branches is at most if the branching variable is . In the LIFO strategy the number of leaves is strictly less than the number of the created branches on each level, with the exception of the deepest level. Hence at any moment the enumeration tree may not have more than

leaves.

The drawback of the strategy is that the flexibility of the enumeration is lost. This flexibility is one of the main advantages of B&B in solving pure integer problems.

If the algorithm skips from one branch to another branch far away from the first one, then it is necessary to reconstruct the second branch, including not only the branching restrictions on the variables but any other information which is necessary for the bounding procedure. In the particular algorithm the procedure determining the bound is linear programming, more precisely a simplex method. If a new restriction is added to the problem as a linear constraint which cuts off the previous optimal solution, then the simplex tableau loses primal feasibility but dual feasibility still holds. Thus a dual simplex method can start immediately, without carrying out a first phase. (The purpose of the first phase, which is itself a complete optimization, is to find a primal or dual feasible basic solution for the primal or dual simplex method, respectively.) If the B&B method skips to another branch, then obtaining the new bound by the simplex method requires the execution of the first phase, too.

A further consideration concerns the construction of feasible solutions. Generally speaking, if good feasible solutions are known in the early phase of the algorithm then the whole procedure can be accelerated. In the current algorithm branching has a “constructive nature”. It means that the value of the branching variable becomes more restricted, therefore it either becomes integer in the further optimal solutions in the subbranches of the branch, or it will be restricted further on. Thus it can be expected that sooner or later a complete integer solution is constructed, which might be feasible or infeasible. On the other hand, if the algorithm skips frequently in the phase when no feasible solution is known, then it is very likely that any construction will be finished only later, i.e. the acceleration will not take place because of the lack of a feasible solution.

If a LIFO type step is to be done and the branching variable is then the lower branch should be chosen in step 7 of the algorithm, if

24.3.4.2.  The maximal bound

The other extreme strategy is that the branch having the maximal bound is selected in each iteration. The idea is simple and clear: it is the most promising branch, therefore it is worth exploring.

Unfortunately the idea is not completely right. The bounds of the higher level branches are not accurate enough. This phenomenon has been discussed during the analysis of the numerical example in Subsection 24.1.3 in relation to (24.12). Thus a somewhat smaller upper bound in a deeper branch can still indicate a more promising branch.

The maximal bound strategy can lead to a very wide enumeration tree which may cause memory problems. Moreover, the construction of feasible solutions will be slow, and therefore relatively few solutions will be enumerated implicitly, i.e. the number of steps will be high and the method will be slow.

24.3.4.3.  Fast bounds and estimates

If the optimal solution of the relaxed problem is non-integer then it can have several fractional components. All of them must be changed to integers to obtain the optimal integer programming solution of the branch. The change of the value of each currently fractional variable has a certain cost. The costs of the individual changes are estimated and summed up. The cost means the loss in the value of the objective function. An adjusted value of the bound of the branch is obtained if the sum of the estimated individual costs is subtracted from the current bound. It is important to emphasize that the adjusted value is not an upper or lower bound of the optimal value of the integer programming solution of the branch; it is only a realistic estimate.

There are two ways to obtain the estimation. The first one uses the crude values of the fractionality. Let and be the fractional part of the variable in the current branch and in the relaxed problem of the original problem, respectively. Further on, let , , and be the optimal value of the relaxed problem in the current branch, in the original problem, and the value of the best feasible integer solution found so far. Generally speaking, the measure of the fractionality of a real number is how far it is from the closest integer, i.e.

Hence the estimate is

(24.65) takes into account the average inaccuracy of the bounds.

The fast bounds defined in (24.55) and (24.57) can be used for the same purpose as well. They concern the correction of the fractionality of a single variable in the current branch. Hence the estimate

is a natural choice.
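One natural reading of this estimate is sketched below, under the assumption that the pair of fast decreases has already been computed for every integer variable that is fractional in the current LP optimum: the adjusted value subtracts, for each such variable, the cheaper of its two guaranteed decreases.

def estimate_from_fast_bounds(bound, penalties):
    """bound: LP bound of the branch; penalties: iterable of (P_k, Q_k) pairs,
    one pair per fractional integer variable (None marks an empty subbranch).
    Returns a realistic estimate, not a valid bound, for the branch."""
    total = 0
    for P, Q in penalties:
        costs = [v for v in (P, Q) if v is not None]
        if costs:
            total += min(costs)
    return bound - total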

24.3.4.4.  A Rule based on depth, bound, and estimates

The constraints defining the branches are integer valued lower and upper bounds on the branching variables. Thus one can expect that these new constraints force the majority of the branching variables to be integer. It means that the integrality of the optimal solution of the relaxed problem improves with the depth of the branch. Thus it is possible to combine the last two rules in the following way. The current branch is abandoned and the algorithm selects the branch with the best bound if the improvement based on the estimates is above a certain threshold.

24.3.5 The selection of the branching variable

In selecting the branching variable again both the fractional part of the non-integer variables and the fast bounds have critical role. A further factor can be the information obtained from the user.

24.3.5.1.  Selection based on the fractional part

The most significant change can be expected from the variable which is farthest from being integer, as the cuts defining the two new branches cut off the most. Using the above measure of fractionality, the rule suggests choosing the branching variable as

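A minimal sketch of this rule; the helper name and the tolerance are illustrative choices.

def most_fractional(x, integer_idx, tol=1e-7):
    """Among the integer variables, pick the one whose current value is farthest
    from an integer, measured by min(f, 1 - f) where f is the fractional part."""
    def distance(j):
        f = x[j] - int(x[j])         # fractional part (the variables are nonnegative)
        return min(f, 1.0 - f)
    candidates = [j for j in integer_idx if distance(j) > tol]
    return max(candidates, key=distance) if candidates else None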
24.3.5.2.  Selection based on fast bounds

Upper bounds are

in the lower and upper branches of branch if the branching variable is .

Here are five possible selection criteria:

Which one can be offered for a B&B algorithm?

Notice that

is a correct upper bound of the branch, as has been mentioned earlier. Thus (24.66) selects according to the most inaccurate upper bound. It is obviously not good. (24.68) does just the opposite: it selects the variable giving the most accurate bound. On the other hand,

is the upper bound of the worse one of the two subbranches. It is in the interest of the algorithm that this subbranch be fathomed without explicit investigation, i.e. that its bound be less than the objective function value of an integer feasible solution. Thus it is good if (24.71) is as small as possible. Hence (24.69) is a good strategy and (24.67) is not. Finally, (24.70) tries to separate the good and the low quality feasible solutions. The conclusion is that (24.69) and (24.70) are the two best ones and (24.68) is still applicable, but (24.66) and (24.67) must be avoided.
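The preferred criteria can be encoded in several ways; the sketch below reads the recommendation as: choose the variable whose worse subbranch has the smallest upper bound, i.e. the one maximizing the larger of its two fast decreases (an empty subbranch counts as an infinite decrease). This is one possible interpretation of the discussion, not a transcription of (24.69).

def select_by_fast_bounds(penalties):
    """penalties: dict mapping each candidate branching variable to its pair
    (P_k, Q_k) of fast decreases, with None for an empty subbranch."""
    INF = float("inf")
    def larger_decrease(pq):
        P, Q = pq
        return max(INF if P is None else P, INF if Q is None else Q)
    return max(penalties, key=lambda k: larger_decrease(penalties[k]))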

24.3.5.3.  Priority rule

Assume that the numerical problem (24.31)-(24.35) is the model of an industrial problem. Then the final user is the manager and/or expert who must apply the decisions coded into the optimal solution. The expert may know which factors (decisions) are the most critical ones from the point of view of the managerial problem and the industrial system. The variables belonging to these factors may have a special importance. Therefore it makes sense to let the user define a priority order of the variables. Then the first non-integer variable of the order can be selected as the branching variable.

24.3.6 The numerical example is revisited

The solution of Problem (24.36) has been analyzed from a geometric point of view in Subsection 24.3.1. Now the above-mentioned methods will be applied and the same course of solution will be obtained.

After introducing the slack variables x_3 and x_4, the (primal) simplex method gives the equivalent form (24.38) of the equations and the objective function. Hence it is clear that the optimal solution of the relaxation is x_1 = 5/2 and x_2 = 3/2. (24.38) gives the following optimal dual simplex tableau:

The first two branches were defined by the inequalities x_1 \le 2 and x_1 \ge 3. The second one is an empty branch. The algebraic evidence of this fact is that there is no negative element in the row of x_1, thus it is not possible to find a pivot element for the dual simplex method after introducing the cut. Now this will be shown in a detailed way. Let the appropriate slack variable be introduced, i.e. the cut is written in the form

The new variable must be expressed by the non-basic variables, i.e. by x_3 and x_4:

Hence

When this row is added to the dual simplex tableau, it is the only row having a negative constant term, but there is no negative coefficient of any non-basic variable, proving that the problem is infeasible. Notice that the sign of each coefficient is an immediate consequence of the sign of the corresponding coefficient in the row of x_1, i.e. it is not necessary to carry out the calculation of the row of the slack variable; it is possible to conclude immediately that the branch is empty.

The fractional part of the value of x_1 equals 1/2. Hence the fast bound (24.55) of the lower branch defined by x_1 \le 2 is

\frac{1}{2}\min\left\{\frac{13/30}{1/6},\ \frac{7/30}{1/6}\right\} = \frac{7}{10}.

It means that the fast upper bound in the branch is 13/2 - 7/10 = 5.8. The bound can be rounded down to 5 as the objective function is integer valued.

Let the slack variable of the cut x_1 \le 2 be introduced, i.e. the cut is written as an equation. Hence the new row, expressed by the non-basic variables, is

If it is added to the simplex tableau then the pivot element is the -1/6 in the column of x_4. After the first pivot step the tableau becomes optimal. It is

Notice that the optimal value is 5.8, i.e. exactly the same as was provided by the fast bound. The reason is that the fast bound gives the value of the objective function after the first pivot step. In the current case the first pivot step immediately produced the optimal solution of the relaxed problem.

x_2 is the only variable having a non-integer value in the simplex tableau. Thus the branching must be done according to x_2. The two new cuts defining the branches are x_2 \le 1 and x_2 \ge 2. There are both positive and negative coefficients in the row of x_2, thus both the lower and the upper branches exist. Moreover

Thus the continuous upper bound is higher on the upper branch, therefore it is selected first for further branching.

The constraint x_2 \ge 2 is added to the problem. By using the current simplex tableau, the equation expressing its slack variable by the non-basic variables is obtained. It becomes the last row of the simplex tableau. In the first pivoting step the slack variable of the constraint x_1 \le 2 enters the basis and the new slack variable leaves it. The first tableau is immediately optimal and it is

Here both x_1 and the slack variable of the constraint x_1 \le 2 are integer variables having non-integer values. Thus branching is possible according to both of them. Notice that the upper branch is empty in the case of x_1, while the lower branch of the slack variable is empty as well. x_1 is selected for branching as it is a variable of the original problem. Now the fast bound (24.55) of the lower branch x_1 \le 1 is

\frac{2}{3}\min\left\{\frac{2/3}{1/3},\ \frac{7/3}{5/3}\right\} = \frac{14}{15}, \qquad \frac{16}{3} - \frac{14}{15} = \frac{22}{5} = 4.4 .
On the other hand, the bound can be improved in accordance with (24.64), as the fractional part of the coefficient of the other non-basic integer variable is not smaller than the fractional part of x_1, i.e. the coefficient may be -1/3 instead of 5/3. It means that the inequality

is claimed instead of

It is transferred to the form

Hence

The improved fast bound is obtained from

It means that the objective function can not be greater than 4. After the first pivoting the simplex tableau becomes

giving the feasible solution x_1 = 1, x_2 = 2 with objective function value 4.

There is only one unfathomed branch, which is to be generated from tableau (24.72) by the constraint x_2 \le 1. Let the corresponding slack variable be introduced. Then the equation

gives the cut

to be added to the tableaux. After two pivoting steps the optimal solution is

Although the optimal solution is not integer, the branch is fathomed as the upper bound is under 5, i.e. the branch cannot contain a feasible solution better than the currently best known integer solution. Thus the method is finished.

Exercises

24.3-1 Show that the rule (24.64) of the choice of the integers is not necessarily optimal from the point of view of the objective function. (Hint. Assume that the variable enters the basis in the first pivoting. Compare the changes in the objective function value if its coefficient is and , respectively.)