Operations Research, Unit II: Duality in Linear Programming
Syllabus:
Duality theory: a fundamental insight, The essence of duality theory, Economic interpretation of duality, Primal-dual relationship, Adapting to other primal forms, The revised simplex method: development of optimality and feasibility conditions, Revised Simplex Algorithms.
2.1 Duality theory: a fundamental insight
2.2 The essence of duality theory
Given our standard form for the primal problem (perhaps after conversion from another form),

Maximize Z = c1x1 + c2x2 + . . . + cnxn,
subject to
ai1x1 + ai2x2 + . . . + ainxn ≤ bi, for i = 1, 2, . . . , m,
and xj ≥ 0, for j = 1, 2, . . . , n,

its dual problem has the form

Minimize W = b1y1 + b2y2 + . . . + bmym,
subject to
a1jy1 + a2jy2 + . . . + amjym ≥ cj, for j = 1, 2, . . . , n,
and yi ≥ 0, for i = 1, 2, . . . , m.

Thus, with the primal problem in maximization form, the dual problem is in minimization form instead. Furthermore, the dual problem uses exactly the same parameters as the primal problem, but in different locations, as summarized below.
1. The coefficients in the objective function of the primal problem are the right-hand sides of the functional constraints in the dual problem.
2. The right-hand sides of the functional constraints in the primal problem are the coefficients in the objective function of the dual problem.
3. The coefficients of a variable in the functional constraints of the primal problem are the coefficients in a functional constraint of the dual problem.
To highlight the comparison, now look at these same two problems in matrix notation (as introduced at the beginning of Sec. 5.2), where c and y = [y1, y2, . . . , ym] are row vectors but b and x are column vectors.
To illustrate, the primal and dual problems for the
Wyndor Glass Co. example of Sec. 3.1 are shown in Table 6.1 in both algebraic
and matrix form.
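Since the dual is obtained purely by transposing the parameter table, the construction can be sketched in a few lines of code. The following pure-Python sketch (the helper name `dual_of` and the list-of-lists layout are our own conventions, not from the text) builds the Wyndor dual from the primal data:

```python
def dual_of(c, A, b):
    """Given a primal in standard form
         maximize c.x  subject to A x <= b, x >= 0,
       return the data (b, A transposed, c) of its dual
         minimize b.y  subject to A^T y >= c, y >= 0.
    The three parameter swaps mirror points 1-3 above."""
    m, n = len(A), len(c)
    A_t = [[A[i][j] for i in range(m)] for j in range(n)]  # column j of A -> row j of dual
    return list(b), A_t, list(c)

# Wyndor Glass Co. data: maximize 3x1 + 5x2
c = [3, 5]
A = [[1, 0],   # x1        <= 4
     [0, 2],   #      2x2  <= 12
     [3, 2]]   # 3x1 + 2x2 <= 18
b = [4, 12, 18]

obj, A_t, rhs = dual_of(c, A, b)
# Dual: minimize 4y1 + 12y2 + 18y3
# subject to  y1 + 3y3 >= 3,  2y2 + 2y3 >= 5,  y >= 0
print(obj, A_t, rhs)
```

The transpose step makes point 3 visible: each column of the primal coefficient matrix becomes a row (one functional constraint) of the dual.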
The primal-dual table for linear programming (Table 6.2) also helps to highlight the correspondence between the two problems. It shows all the linear programming parameters (the aij, bi, and cj) and how they are used to construct the two problems. All the headings for the primal problem are horizontal, whereas the headings for the dual problem are read by turning the book sideways. For the primal problem, each column (except the right-side column) gives the coefficients of a single variable in the respective constraints and then in the objective function, whereas each row (except the bottom one) gives the parameters for a single constraint. For the dual problem, each row (except the right-side row) gives the coefficients of a single variable in the respective constraints and then in the objective function, whereas each column (except the rightmost one) gives the parameters for a single constraint. In addition, the right-side column gives the right-hand sides for the primal problem and the objective function coefficients for the dual problem, whereas the bottom row gives the objective function coefficients for the primal problem and the right-hand sides for the dual problem.
Consequently, we now have the following general relationships between the primal and dual problems.
1. The parameters for a (functional) constraint in either problem are the coefficients of a variable in the other problem.
2. The coefficients in the objective function of either problem are the right-hand sides for the other problem.
Thus, there is a direct correspondence between these entities in the two problems, as summarized in Table 6.3. These correspondences are a key to some of the applications of duality theory, including sensitivity analysis.
The Solved Examples section of the book’s website provides another example of using the primal-dual table to construct the dual problem for a linear programming model.
Origin of the Dual Problem
Duality theory is based directly on the fundamental insight (particularly with regard to row 0) presented in Sec. 5.3. To see why, we continue to use the notation introduced in Table 5.9 for row 0 of the final tableau, except for replacing Z* by W* and dropping the asterisks from z* and y* when referring to any tableau. Thus, at any given iteration of the simplex method for the primal problem, the current numbers in row 0 are denoted as shown in the (partial) tableau given in Table 6.4. For the coefficients of x1, x2, . . . , xn, recall that z = (z1, z2, . . . , zn) denotes the vector that the simplex method added to the vector of initial coefficients, –c, in the process of reaching the current tableau. (Do not confuse z with the value of the objective function Z.) Similarly, since the initial coefficients of xn+1, xn+2, . . . , xn+m in row 0 all are 0, y = (y1, y2, . . . , ym) denotes the vector that the simplex method has added to these coefficients. Also recall [see Eq. (1) in the statement of the fundamental insight in Sec. 5.3] that the fundamental insight led to the following relationships between these quantities and the parameters of the original model:

W = b1y1 + b2y2 + . . . + bmym,
zj = a1jy1 + a2jy2 + . . . + amjym, for j = 1, 2, . . . , n.
To illustrate these relationships with the Wyndor example, the first equation gives W = 4y1 + 12y2 + 18y3, which is just the objective function for the dual problem shown in the upper right-hand box of Table 6.1. The second set of equations gives z1 = y1 + 3y3 and z2 = 2y2 + 2y3, which are the left-hand sides of the functional constraints for this dual problem. Thus, by subtracting the right-hand sides of these ≥ constraints (c1 = 3 and c2 = 5), (z1 – c1) and (z2 – c2) can be interpreted as being the surplus variables for these functional constraints.
The remaining key is to express what the simplex method tries to accomplish (according to the optimality test) in terms of these symbols. Specifically, it seeks a set of basic variables, and the corresponding BF solution, such that all coefficients in row 0 are nonnegative. It then stops with this optimal solution. Using the notation in Table 6.4, this goal is expressed symbolically as follows:

Condition for Optimality:
zj – cj ≥ 0 for j = 1, 2, . . . , n,
yi ≥ 0 for i = 1, 2, . . . , m.
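The optimality condition can be verified mechanically for any trial vector y by computing zj = a1jy1 + . . . + amjym and testing the two sets of inequalities. A small pure-Python sketch (the helper name is ours), using the Wyndor data:

```python
def satisfies_optimality(y, c, A):
    """Check yi >= 0 for all i and zj - cj >= 0 for all j,
    where zj = sum_i a_ij * y_i (Table 6.4 notation)."""
    m, n = len(A), len(c)
    if any(yi < 0 for yi in y):
        return False
    z = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    return all(z[j] - c[j] >= 0 for j in range(n))

c = [3, 5]
A = [[1, 0], [0, 2], [3, 2]]          # Wyndor constraint coefficients

print(satisfies_optimality([0, 0, 0], c, A))    # False: initial row 0, z - c = (-3, -5)
print(satisfies_optimality([0, 1.5, 1], c, A))  # True: final row 0, z - c = (0, 0)
```

Feasibility of y for the dual problem is exactly this condition, which is the point of the derivation that follows.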
After we substitute the preceding expression for zj, the condition for optimality says that the simplex method can be interpreted as seeking values for y1, y2, . . . , ym such that

a1jy1 + a2jy2 + . . . + amjym ≥ cj, for j = 1, 2, . . . , n,
yi ≥ 0, for i = 1, 2, . . . , m.
But, except for lacking an objective for W, this
problem is precisely the dual problem! To complete the formulation,
let us now explore what the missing objective should be.
Since W is just the current value of Z, and
since the objective for the primal problem is to maximize Z, a
natural first reaction is that W should be maximized also.
However, this is not correct for the following rather subtle reason: The only feasible solutions
for this new problem are those that satisfy the condition for optimality for
the primal problem. Therefore, it is only the optimal solution
for the primal problem that corresponds to a feasible solution for this new
problem. As a consequence, the optimal value of Z in the
primal problem is the minimum feasible value of W in
the new problem, so W should be minimized. (The full justification for this conclusion is provided by the relationships we develop in Sec. 6.3.) Adding this objective of minimizing W gives the complete dual
problem.
Consequently, the dual problem may be viewed as a restatement in linear programming terms of the goal of the simplex method, namely, to reach a solution for the primal problem that satisfies the optimality test. Before this goal has been reached, the corresponding y in row 0 (coefficients of slack variables) of the current tableau must be infeasible for the dual problem. However, after the goal is reached, the corresponding y must be an optimal solution (labeled y*) for the dual problem, because it is a feasible solution that attains the minimum feasible value of W. This optimal solution (y1*, y2*, . . . , ym*) provides for the primal problem the shadow prices that were described in Sec. 4.7. Furthermore, this optimal W is just the optimal value of Z, so the optimal objective function values are equal for the two problems. This fact also implies that cx ≤ yb for any x and y that are feasible for the primal and dual problems, respectively.
To illustrate, the left-hand side of Table 6.5 shows row
0 for the respective iterations when the simplex method is applied to the
Wyndor Glass Co. example. In each case, row 0 is partitioned into three parts:
the coefficients of the decision variables (x1, x2), the
coefficients of the slack variables (x3, x4, x5),
and the right-hand side (value of Z ). Since the coefficients
of the slack variables give the corresponding values of the dual variables ( y1, y2, y3),
each row 0 identifies a corresponding solution for the dual problem, as shown
in the y1, y2, and y3 columns of Table 6.5.
To interpret the next two columns, recall that (z1 – c1) and (z2 – c2) are the surplus variables for the functional constraints in the dual problem, so the full dual problem after augmenting with these surplus variables is

Minimize W = 4y1 + 12y2 + 18y3,
subject to
y1 + 3y3 – (z1 – c1) = 3
2y2 + 2y3 – (z2 – c2) = 5
and
y1 ≥ 0, y2 ≥ 0, y3 ≥ 0.
Thus, a negative value for either surplus variable
indicates that the corresponding constraint is violated. Also included in the
rightmost column of the table is the calculated value of the dual objective
function W = 4y1 + 12y2 + 18y3.
As displayed in Table 6.4, all these
quantities to the right of row 0 in Table 6.5 already are identified by row 0
without requiring any new calculations. In particular, note in Table 6.5 how each number
obtained for the dual problem already appears in row 0 in the spot indicated by
Table 6.4.
For the initial row 0, Table 6.5 shows that the corresponding dual solution (y1, y2, y3) = (0, 0, 0) is infeasible because both surplus variables are negative. The first iteration succeeds in eliminating one of these negative values, but not the other. After two iterations, the optimality test is satisfied for the primal problem because all the dual variables and surplus variables are nonnegative. This dual solution (y1, y2, y3) = (0, 3/2, 1) is optimal (as could be verified by applying the simplex method directly to the dual problem), so the optimal value of Z and W is Z* = 36 = W*.
Now let us
summarize the newly discovered key relationships between the primal and dual
problems.
Weak duality property: If x is a feasible solution for the primal problem and y is a feasible solution for the dual problem, then
cx ≤ yb.
For example, for the Wyndor Glass Co. problem, one feasible solution is x1 = 3, x2 = 3, which yields Z = cx = 24, and one feasible solution for the dual problem is y1 = 1, y2 = 1, y3 = 2, which yields a larger objective function value W = yb = 52. These are just sample feasible solutions for the two problems. For any such pair of feasible solutions, this inequality must hold because the maximum feasible value of Z = cx (36) equals the minimum feasible value of the dual objective function W = yb, which is our next property.
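The weak duality property is easy to confirm numerically for the sample solutions just given. A pure-Python sketch (helper name ours):

```python
def objective_values(c, x, b, y):
    """Return (cx, yb) for feasible primal x and dual y; weak duality
    says cx <= yb, so the gap yb - cx is nonnegative."""
    cx = sum(cj * xj for cj, xj in zip(c, x))
    yb = sum(yi * bi for yi, bi in zip(y, b))
    return cx, yb

c, b = [3, 5], [4, 12, 18]       # Wyndor objective and right-hand sides
cx, yb = objective_values(c, [3, 3], b, [1, 1, 2])
print(cx, yb)   # 24 52 -- cx <= yb, as the weak duality property requires
```

Note that yb also serves as an upper bound on the optimal Z, which is the basis for the solution-evaluation application discussed later in this section.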
Strong
duality property: If x*
is an optimal solution for the primal problem and y* is an optimal solution for
the dual problem, then
cx* = y*b.
Thus, these
two properties imply that cx < yb for feasible solutions if one or
both of them are not optimal for their respective problems,
whereas equality holds when both are optimal.
The weak duality property describes the relationship between any pair of solutions for the primal and dual problems where both solutions are feasible for their respective problems. At each iteration, the simplex method finds a specific pair of solutions for the two problems, where the primal solution is feasible but the dual solution is not feasible (except at the final iteration). Our next property describes this situation and the relationship between this pair of solutions.
Complementary solutions property: At each iteration, the simplex method simultaneously identifies a CPF solution x for the primal problem and a complementary solution y for the dual problem (found in row 0, the coefficients of the slack variables), where
cx = yb.
If x is not optimal for the primal problem, then y is not feasible for the dual problem.
To illustrate, after one iteration for the Wyndor Glass Co. problem, x1 = 0, x2 = 6, and y1 = 0, y2 = 5/2, y3 = 0, with cx = 30 = yb. This x is feasible for the primal problem, but this y is not feasible for the dual problem (since it violates the constraint y1 + 3y3 ≥ 3).
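These numbers can be reproduced directly from the data. A small pure-Python check (variable names ours):

```python
c, b = [3, 5], [4, 12, 18]
A = [[1, 0], [0, 2], [3, 2]]   # Wyndor constraint coefficients

x = [0, 6]          # primal CPF solution after one iteration
y = [0, 5/2, 0]     # complementary solution read from row 0

cx = sum(ci * xi for ci, xi in zip(c, x))
yb = sum(yi * bi for yi, bi in zip(y, b))
assert cx == yb == 30   # complementary solutions property: cx = yb

# Dual feasibility: column j of A gives dual constraint sum_i a_ij*y_i >= c_j,
# so (z_j - c_j) below are the dual surplus variables.
surplus = [sum(A[i][j] * y[i] for i in range(3)) - c[j] for j in range(2)]
print(surplus)   # [-3.0, 0.0]: y1 + 3y3 >= 3 is violated, so y is infeasible
```

The negative first surplus value is exactly the negative row-0 coefficient of x1 at this iteration, matching the correspondence laid out in Table 6.4.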
The complementary solutions property also holds at the final iteration of the simplex method, where an optimal solution is found for the primal problem. However, more can be said about the complementary solution y in this case, as presented in the next property.
Complementary optimal solutions property: At the final iteration, the simplex method simultaneously identifies an optimal solution x* for the primal problem and a complementary optimal solution y* for the dual problem (found in row 0, the coefficients of the slack variables), where
cx* = y*b.
The yi* are the shadow prices for the primal problem.
For the example, the final iteration yields the complementary optimal solution (y1*, y2*, y3*) = (0, 3/2, 1) for the dual problem, with cx* = 36 = y*b.
We shall take a closer look at some of these properties in Sec. 6.3. There you will see that the complementary solutions property can be extended considerably further. In particular, after slack and surplus variables are introduced to augment the respective problems, every basic solution in the primal problem has a complementary basic solution in the dual problem. We already have noted that the simplex method identifies the values of the surplus variables for the dual problem as zj – cj in Table 6.4. This result then leads to an additional complementary slackness property that relates the basic variables in one problem to the nonbasic variables in the other (Tables 6.7 and 6.8), but more about that later.
In Sec. 6.4, after describing how to construct the dual problem when the primal problem is not in our standard form, we discuss another very useful property, which is summarized as follows:

Symmetry property: For any primal problem and its dual problem, all relationships between them must be symmetric because the dual of this dual problem is this primal problem.
Therefore, all the preceding properties hold regardless of which of the two problems is labeled as the primal problem. (The direction of the inequality for the weak duality property does require that the primal problem be expressed or reexpressed in maximization form and the dual problem in minimization form.) Consequently, the simplex method can be applied to either problem, and it simultaneously will identify complementary solutions (ultimately a complementary optimal solution) for the other problem.
So far, we have focused on the relationships between feasible or optimal solutions in the primal problem and corresponding solutions in the dual problem. However, it is possible that the primal (or dual) problem either has no feasible solutions or has feasible solutions but no optimal solution (because the objective function is unbounded). Our final property summarizes the primal-dual relationships under all these possibilities.
Duality theorem: The following are the only possible relationships between the primal and dual problems.
1. If one problem has feasible solutions and a bounded objective function (and so has an optimal solution), then so does the other problem, so both the weak and strong duality properties are applicable.
2. If one problem has feasible solutions and an unbounded objective function (and so no optimal solution), then the other problem has no feasible solutions.
3. If one problem has no feasible solutions, then the other problem has either no feasible solutions or an unbounded objective function.
Applications
As we have just implied, one important application of duality theory is that the dual problem can be solved directly by the simplex method in order to identify an optimal solution for the primal problem. We discussed in Sec. 4.8 that the number of functional constraints affects the computational effort of the simplex method far more than the number of variables does. If m > n, so that the dual problem has fewer functional constraints (n) than the primal problem (m), then applying the simplex method directly to the dual problem instead of the primal problem probably will achieve a substantial reduction in computational effort.
The weak and strong duality properties describe key relationships between the primal and dual problems. One useful application is for evaluating a proposed solution for the primal problem. For example, suppose that x is a feasible solution that has been proposed for implementation and that a feasible solution y has been found by inspection for the dual problem such that cx = yb. In this case, x must be optimal without the simplex method even being applied! Even if cx < yb, then yb still provides an upper bound on the optimal value of Z, so if yb – cx is small, intangible factors favoring x may lead to its selection without further ado.
One of the key applications of the complementary solutions property is its use in the dual simplex method presented in Sec. 8.1. This algorithm operates on the primal problem exactly as if the simplex method were being applied simultaneously to the dual problem, which can be done because of this property. Because the roles of row 0 and the right side in the simplex tableau have been reversed, the dual simplex method requires that row 0 begin and remain nonnegative while the right side begins with some negative values (subsequent iterations strive to reach a nonnegative right side). Consequently, this algorithm occasionally is used because it is more convenient to set up the initial tableau in this form than in the form required by the simplex method. Furthermore, it frequently is used for reoptimization (discussed in Sec. 4.7), because changes in the original model lead to the revised final tableau fitting this form. This situation is common for certain types of sensitivity analysis, as you will see in the next chapter.
In general
terms, duality theory plays a central role in sensitivity analysis. This role
is the topic of Sec. 6.5.
Another important application is its use in the economic interpretation of the dual problem and the resulting insights for analyzing the primal problem. You already have seen one example when we discussed shadow prices in Sec. 4.7. Section 6.2 describes how this interpretation extends to the entire dual problem and then to the simplex method.
2.3 Economic interpretation of duality
The linear programming problem can be viewed
as a resource allocation model in which the objective is to maximize revenue
subject to the availability of limited resources. Looking at the problem from
this standpoint, the associated dual problem offers interesting economic
interpretations of the LP resource allocation model. To formalize the
discussion, we consider the following representation of the general primal and
dual problems:
2.3.1 Economic Interpretation of Dual
Variables
2.3.2 Economic Interpretation of Dual
Constraints
2.4 Primal-dual relationship
Changes made in the original LP model will
change the elements of the current optimal tableau, which in turn may affect
the optimality and/or the feasibility of the current solution. This section
introduces a number of primal-dual relationships that can be used to recompute
the elements of the optimal simplex tableau. These relationships will form the
basis for the economic interpretation of the LP model as well as for post
optimality analysis.
· Review of Simple Matrix Operations
The simplex tableau computations use only three elementary matrix operations: (row vector) × (matrix), (matrix) × (column vector), and (scalar) × (matrix). These operations are summarized here for convenience. First, we introduce some matrix definitions:
1. A matrix, A, of size (m × n) is a rectangular array of elements with m rows and n columns.
2. A row vector, V, of size m is a (1 × m) matrix.
3. A column vector, P, of size n is an (n × 1) matrix.
These definitions can be represented mathematically as
V = (v1, v2, . . . , vm), P = (p1, p2, . . . , pn) written as a column, A = (aij), i = 1, . . . , m, j = 1, . . . , n.
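As a quick illustration, the three operations can be written out in pure Python (helper names ours); each follows the usual row-by-column multiplication rule:

```python
def vec_mat(v, A):
    """(row vector) x (matrix): a (1 x m) vector times an (m x n) matrix."""
    return [sum(v[i] * A[i][j] for i in range(len(v))) for j in range(len(A[0]))]

def mat_vec(A, p):
    """(matrix) x (column vector): an (m x n) matrix times an (n x 1) vector."""
    return [sum(A[i][j] * p[j] for j in range(len(p))) for i in range(len(A))]

def scal_mat(a, A):
    """(scalar) x (matrix): multiply every element of A by the scalar a."""
    return [[a * v for v in row] for row in A]

A = [[1, 0], [0, 2], [3, 2]]       # a (3 x 2) matrix
print(vec_mat([1, 1, 1], A))       # [4, 4]
print(mat_vec(A, [2, 6]))          # [2, 12, 18]
print(scal_mat(2, [[1, 2]]))       # [[2, 4]]
```

These three helpers are all the linear algebra the tableau computations below require.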
· Simplex Tableau Layout
Figure 4.1 gives a schematic representation of the starting
and general simplex tableaus. In the starting tableau, the
constraint coefficients under the starting variables form an identity matrix
(all main-diagonal elements equal 1 and all off-diagonal elements equal zero).
With this arrangement, subsequent iterations of the simplex tableau generated
by the Gauss-Jordan row operations (see Chapter 3) will modify the elements of
the identity matrix to produce what is known as the inverse matrix. As we will
see in the remainder of this chapter, the inverse matrix is key to computing
all the elements of the associated simplex tableau.
· Optimal Dual Solution
The primal and dual solutions are so closely related that
the optimal solution of either problem directly yields (with little additional
computation) the optimal solution to the other. Thus, in an LP model in which
the number of variables is considerably smaller than the number of constraints,
computational savings may be realized by solving the dual, from which the
primal solution is determined automatically. This result follows because the
amount of simplex computation depends largely (though not totally) on the
number of constraints (see Problem 2, Set 4.2c).
This section provides two methods for determining the
dual values. Note that the dual of the dual is itself the primal, which means
that the dual solution can also be used to yield the optimal primal solution
automatically.
· Simplex Tableau Computations
This section shows how any iteration of the entire simplex tableau can be generated from the original data of the problem, the inverse associated with the iteration, and the dual problem. Using the layout of the simplex tableau in Figure 4.1, we can divide the computations into two types:
1. Constraint columns (left- and right-hand sides).
2. Objective z-row.
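Both computation types reduce to multiplications by the inverse: the right-hand side is B⁻¹b, any constraint column j is B⁻¹Pj, and the z-row follows from the dual values y = cB B⁻¹. The sketch below is a toy illustration (helper names and the hand-rolled Gauss-Jordan inverse are ours) that reproduces these computations for the Wyndor example at its optimal basis (x1, x2, s1), where s1 is the slack of the first constraint:

```python
def inverse(M):
    """Gauss-Jordan inverse with partial pivoting (the row operations of Ch. 3)."""
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Wyndor data with slacks s1, s2, s3; the optimal basis is (x1, x2, s1).
b   = [4.0, 12.0, 18.0]
B   = [[1.0, 0.0, 1.0],   # columns of x1, x2, s1 in the constraints
       [0.0, 2.0, 0.0],
       [3.0, 2.0, 0.0]]
c_B = [3.0, 5.0, 0.0]

B_inv = inverse(B)
x_B = mat_vec(B_inv, b)                                              # B^-1 b
y = [sum(c_B[i] * B_inv[i][k] for i in range(3)) for k in range(3)]  # c_B B^-1
print(x_B)   # approx. [2, 6, 2]: x1 = 2, x2 = 6, s1 = 2
print(y)     # approx. [0, 1.5, 1]: the optimal dual values (shadow prices)
```

The z-row entry for any variable j is then y·Pj minus its objective coefficient, exactly the zj – cj quantities used throughout Sec. 2.2.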
2.5 Adapting to other primal forms
Thus far it has been assumed that the model for the
primal problem is in our standard form. However, we indicated at the beginning
of the chapter that any linear programming problem, whether in our standard
form or not, possesses a dual problem. Therefore, this section focuses on how
the dual problem changes for other primal forms.
Each nonstandard form was discussed in Sec. 4.6, and we pointed out how it is possible to convert each one to an equivalent standard form if so desired. These conversions are summarized in Table 6.12. Hence, you always have the option of converting any model to our standard form and then constructing its dual problem in the usual way. To illustrate, we do this for our standard dual problem (it must have a dual also) in Table 6.13. Note that what we end up with is just our standard primal problem! Since any pair of primal and dual problems can be converted to these forms, this fact implies that the dual of the dual problem always is the primal problem. Therefore, for any primal problem and its dual problem, all relationships between them must be symmetric. This is just the symmetry property already stated in Sec. 6.1 (without proof), but now Table 6.13 demonstrates why it holds.
One consequence of the symmetry property is that all the
statements made earlier in the chapter about the relationships of the dual
problem to the primal problem also hold in reverse.
Another consequence is that it is immaterial which problem is called the primal and which is called the dual. In practice, you might see a linear programming problem fitting our standard form being referred to as the dual problem. The convention is that the model formulated to fit the actual problem is called the primal problem, regardless of its form.
Our illustration of how to construct the dual problem for a nonstandard primal problem did not involve either equality constraints or variables unconstrained in sign. Actually, for these two forms, a shortcut is available. It is possible to show (see Probs. 6.4-7 and 6.4-2a) that an equality constraint in the primal problem should be treated just like a ≤ constraint in constructing the dual problem except that the nonnegativity constraint for the corresponding dual variable should be deleted (i.e., this variable is unconstrained in sign). By the symmetry property, deleting a nonnegativity constraint in the primal problem affects the dual problem only by changing the corresponding inequality constraint to an equality constraint.
Another shortcut involves functional constraints in ≥ form for a maximization problem. The straightforward (but longer) approach would begin by converting each such constraint to ≤ form. Constructing the dual problem in the usual way then gives –aij as the coefficient of yi in functional constraint j (which has ≥ form) and a coefficient of –bi in the objective function (which is to be minimized), where yi also has a nonnegativity constraint yi ≥ 0. Now suppose we define a new variable yi′ = –yi. The changes caused by expressing the dual problem in terms of yi′ instead of yi are that (1) the coefficients of the variable become aij for functional constraint j and bi for the objective function and (2) the constraint on the variable becomes yi′ ≤ 0 (a nonpositivity constraint). The shortcut is to use yi′ instead of yi as a dual variable so that the parameters in the original constraint (aij and bi) immediately become the coefficients of this variable in the dual problem.
Here is a useful mnemonic device for remembering what the forms of dual constraints should be. With a maximization problem, it might seem sensible for a functional constraint to be in ≤ form, slightly odd to be in = form, and somewhat bizarre to be in ≥ form. Similarly, for a minimization problem, it might seem sensible to be in ≥ form, slightly odd to be in = form, and somewhat bizarre to be in ≤ form. For the constraint on an individual variable in either kind of problem, it might seem sensible to have a nonnegativity constraint, somewhat odd to have no constraint (so the variable is unconstrained in sign), and quite bizarre for the variable to be restricted to be less than or equal to zero. Now recall the correspondence between entities in the primal and dual problems indicated in Table 6.3; namely, functional constraint i in one problem corresponds to variable i in the other problem, and vice versa. The sensible-odd-bizarre method, or SOB method for short, says that the form of a functional constraint or the constraint on a variable in the dual problem should be sensible, odd, or bizarre, depending on whether the form for the corresponding entity in the primal problem is sensible, odd, or bizarre. Here is a summary.
The SOB Method for Determining the Form of Constraints in the Dual.³
1. Formulate the primal problem in either maximization form or minimization form, and then the dual problem automatically will be in the other form.
2. Label the different forms of functional constraints and of constraints on individual variables in the primal problem as being sensible, odd, or bizarre according to Table 6.14. The labeling of the functional constraints depends on whether the problem is a maximization problem (use the second column) or a minimization problem (use the third column).
3. For each constraint on an individual variable in the dual problem, use the form that has the same label as for the functional constraint in the primal problem that corresponds to this dual variable (as indicated by Table 6.3).
4. For each functional constraint in the dual problem, use the form that has the same label as for the constraint on the corresponding individual variable in the primal problem (as indicated by Table 6.3).
The arrows between the second and third columns of Table 6.14 spell out the correspondence between the forms of constraints in the primal and dual. Note that the correspondence always is between a functional constraint in one problem and a constraint on an individual variable in the other problem. Since the primal problem can be either a maximization or minimization problem, where the dual then will be of the opposite type, the second column of the table gives the form for whichever is the maximization problem and the third column gives the form for the other problem (a minimization problem).
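Because the SOB bookkeeping is purely mechanical, it can be encoded as a lookup. The sketch below (function names are ours; the label tables transcribe Table 6.14 as described in the text) maps primal forms to dual forms:

```python
# SOB labels: in the maximization problem,
#   functional constraints: <=  is sensible, =  is odd, >=  is bizarre;
#   variables:              >=0 is sensible, free is odd, <=0 is bizarre.
# In the minimization problem the inequality labels are reversed.
MAX_CONSTRAINT = {"<=": "sensible", "=": "odd", ">=": "bizarre"}
MIN_CONSTRAINT = {">=": "sensible", "=": "odd", "<=": "bizarre"}
VARIABLE       = {">=0": "sensible", "free": "odd", "<=0": "bizarre"}

def dual_variable_form(primal_constraint, primal_is_max=True):
    """Functional constraint i in the primal -> variable i in the dual,
    carrying the same sensible/odd/bizarre label (steps 2-3 of the method)."""
    label = (MAX_CONSTRAINT if primal_is_max else MIN_CONSTRAINT)[primal_constraint]
    return next(form for form, lab in VARIABLE.items() if lab == label)

def dual_constraint_form(primal_variable, primal_is_max=True):
    """Variable j in the primal -> functional constraint j in the dual (step 4);
    the dual has the opposite sense, so use the other problem's label table."""
    label = VARIABLE[primal_variable]
    table = MIN_CONSTRAINT if primal_is_max else MAX_CONSTRAINT
    return next(form for form, lab in table.items() if lab == label)

print(dual_variable_form("="))        # free: equality constraint -> unconstrained dual variable
print(dual_constraint_form(">=0"))    # >=: nonnegative variable -> >= dual constraint
```

Note how the first example reproduces the equality-constraint shortcut stated earlier: an = constraint (odd) yields a dual variable unconstrained in sign (odd).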
To illustrate, consider the radiation therapy example presented at the beginning of Sec. 3.4. To show the conversion in both directions in Table 6.14, we begin with the maximization form of this model as the primal problem, before using the (original) minimization form.
The primal problem in maximization form is shown on the
left side of Table 6.15. By using the second column of Table 6.14 to represent
this problem, the arrows in this table indicate the form of the dual problem in
the third column. These same arrows are used in Table 6.15 to show the
resulting dual problem. (Because of these arrows, we have placed the functional
constraints last in the dual problem rather than in their usual top position.)
³This particular mnemonic device (and a related one) for remembering what the forms of the dual constraints should be has been suggested by Arthur T. Benjamin, a mathematics professor at Harvey Mudd College. An interesting and wonderfully bizarre fact about Professor Benjamin himself is that he is one of the world’s great human calculators who can perform such feats as quickly multiplying six-digit numbers in his head. For a further discussion and derivation of the SOB method, see A. T. Benjamin: “Sensible Rules for Remembering Duals — The S-O-B Method,” SIAM Review, 37(1): 85–87, 1995.
Beside each constraint in both problems, we have inserted
(in parentheses) an S, O, or B to label the form as sensible, odd, or bizarre.
As prescribed by the SOB method, the label for each dual constraint always is
the same as for the corresponding primal constraint.
However, there was no need (other than for illustrative purposes) to convert the primal problem to maximization form. Using the original minimization form, the equivalent primal problem is shown on the left side of Table 6.16. Now we use the third column of Table 6.14 to represent this primal problem, where the arrows indicate the form of the dual problem in the second column. These same arrows in Table 6.16 show the resulting dual problem on the right side. Again, the labels on the constraints show the application of the SOB method.
Just as the primal problems in Tables 6.15 and 6.16 are equivalent, the two dual problems also are completely equivalent. The key to recognizing this equivalency lies in the fact that the variables in each version of the dual problem are the negative of those in the other version (y1′ = –y1, y2′ = –y2, y3 = –y3′). Therefore, for each version, if the variables in the other version are used instead, and if both the objective function and the constraints are multiplied through by –1, then the other version is obtained. (Problem 6.4-5 asks you to verify this.)
If you would like to see another example of
using the SOB method to construct a dual problem, one is given in the Solved
Examples section of the book’s website.
If the simplex method is to be applied to either a primal or a dual problem that has any variables constrained to be nonpositive (for example, y3′ ≤ 0 in the dual problem of Table 6.15), this variable may be replaced by its nonnegative counterpart (for example, y3 = –y3′).
When artificial variables are used to help the simplex method solve a primal problem, the duality interpretation of row 0 of the simplex tableau is the following: Since artificial variables play the role of slack variables, their coefficients in row 0 now provide the values of the corresponding dual variables in the complementary basic solution for the dual problem. Since artificial variables are used to replace the real problem with a more convenient artificial problem, this dual problem actually is the dual of the artificial problem. However, after all the artificial variables become nonbasic, we are back to the real primal and dual problems. With the two-phase method, the artificial variables would need to be retained in phase 2 in order to read off the complete dual solution from row 0. With the Big M method, since M has been added initially to the coefficient of each artificial variable in row 0, the current value of each corresponding dual variable is the current coefficient of this artificial variable minus M.
For example, look at row 0 in the final simplex tableau for the radiation therapy example, given at the bottom of Table 4.12. After M is subtracted from the coefficients of the artificial variables x4 and x6, the optimal solution for the corresponding dual problem given in Table 6.15 is read from the coefficients of x3, x4, and x6 as (y1, y2, y3′) = (0.5, –1.1, 0). As usual, the surplus variables for the two functional constraints are read from the coefficients of x1 and x2 as z1 – c1 = 0 and z2 – c2 = 0.
2.6 The revised simplex method: development of optimality and feasibility conditions
Section 7.1.1 shows that the optimum solution of a linear program is always associated with a basic (feasible) solution. The simplex method’s search for the optimum starts by selecting a feasible basis, B, and then moving to another basis, Bnext, that yields a better (or, at least, no worse) value of the objective function. Continuing in this manner, the optimum basis is eventually reached.
The iterative steps of the revised simplex method are exactly the same as in the tableau simplex method presented in Chapter 3. The main difference is that the computations in the revised method are based on matrix manipulations rather than on row operations. The use of matrix algebra reduces the adverse effect of machine roundoff error by controlling the accuracy of computing B⁻¹. This result follows because, as Section 7.1.2 shows, the entire simplex tableau can be computed from the original data and the current B⁻¹. In the tableau simplex method of Chapter 3, each tableau is generated from the immediately preceding one, which tends to worsen the problem of roundoff error.
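To make the matrix-based iteration concrete, here is a compact pure-Python sketch of the revised simplex loop (all names are ours). For simplicity it recomputes B⁻¹ from the original columns at every iteration rather than updating it in product form, which keeps the algebra visible though it gives up some efficiency; it assumes b ≥ 0 so the all-slack basis is feasible, and it does not guard against unboundedness or cycling:

```python
def inverse(M):
    """Gauss-Jordan inverse with partial pivoting."""
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def revised_simplex(c, A, b):
    """Maximize c.x s.t. A x <= b, x >= 0, assuming b >= 0 (slack start).
    Returns (x, y, z): primal optimum, dual values c_B B^-1, objective."""
    m, n = len(A), len(c)
    cols = [[A[i][j] for i in range(m)] for j in range(n)] + \
           [[float(i == k) for i in range(m)] for k in range(m)]  # slack columns
    cost = list(map(float, c)) + [0.0] * m
    basis = list(range(n, n + m))                  # start with the slack basis
    while True:
        B_inv = inverse([[cols[j][i] for j in basis] for i in range(m)])
        x_B = [sum(B_inv[i][k] * b[k] for k in range(m)) for i in range(m)]
        y = [sum(cost[basis[i]] * B_inv[i][k] for i in range(m)) for k in range(m)]
        # optimality condition: z_j - c_j = y.P_j - c_j >= 0 for all nonbasic j
        entering = next((j for j in range(n + m) if j not in basis and
                         sum(y[i] * cols[j][i] for i in range(m)) - cost[j] < -1e-9),
                        None)
        if entering is None:                       # optimality test satisfied
            x = [0.0] * (n + m)
            for i, j in enumerate(basis):
                x[j] = x_B[i]
            return x[:n], y, sum(cost[j] * x[j] for j in range(n + m))
        d = [sum(B_inv[i][k] * cols[entering][k] for k in range(m)) for i in range(m)]
        leave = min((i for i in range(m) if d[i] > 1e-9), key=lambda i: x_B[i] / d[i])
        basis[leave] = entering                    # minimum ratio test pivot

x, y, z = revised_simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, y, z)   # approx. x = [2, 6], y = [0, 1.5, 1], z = 36 for Wyndor
```

Note that the entering test is exactly the optimality condition of Sec. 2.2 (zj – cj ≥ 0), and the y returned at termination is the complementary optimal dual solution.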
2.6.1 Development of the Optimality and
Feasibility Conditions
2.6.2 Revised Simplex Algorithms