Linear programming
Many of the problems for which we want algorithms are optimization tasks: the shortest path, the cheapest spanning tree, the longest increasing subsequence, and so on. In such cases, we seek a solution that
(1) satisfies certain constraints (for instance, the path must use edges of the graph and lead from s to t, the tree must touch all nodes, the subsequence must be increasing);
(2) is the best possible, with respect to some well-defined criterion, among all solutions that satisfy these constraints.
Linear programming describes a broad class of optimization tasks in which both the constraints and the optimization criterion are linear functions. It turns out an enormous number of problems can be expressed in this way.
In a linear programming problem we are given a set of variables, and we want to assign real values to them so as to
(1) satisfy a set of linear equations and/or linear inequalities involving these variables and
(2) maximize or minimize a given linear objective function.
It is a general rule of linear programs that the optimum is achieved at a vertex of the feasible region. The only exceptions are cases in which there is no optimum; this can happen in two ways:
1. The linear program is infeasible; that is, the constraints are so tight that it is impossible to satisfy all of them. For instance, x ≤ 1, x ≥ 2.
2. The constraints are so loose that the feasible region is unbounded, and it is possible to achieve arbitrarily high objective values. For instance: maximize x1 + x2 subject to x1, x2 ≥ 0.
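To make this concrete, here is a minimal sketch in Python (assuming SciPy's scipy.optimize.linprog is available; the specific numbers are illustrative, not from the source). linprog minimizes, so a maximization objective is passed in negated form. The first program is bounded and the solver returns a vertex of the feasible region; the second is the unbounded example above, which the solver flags instead of returning a value.

from scipy.optimize import linprog

# Bounded example: maximize x1 + x2 subject to x1 + 2*x2 <= 4, x1, x2 >= 0.
# linprog minimizes, so we pass the negated objective.
res = linprog(c=[-1, -1], A_ub=[[1, 2]], b_ub=[4], bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimum lies at a vertex of the feasible region

# Unbounded example: maximize x1 + x2 subject only to x1, x2 >= 0.
res = linprog(c=[-1, -1], bounds=[(0, None), (0, None)])
print(res.status)         # status 3 means the problem is unbounded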
As evidenced in our examples, a general linear program has many degrees of freedom.
1. It can be either a maximization or a minimization problem.
2. Its constraints can be equations and/or inequalities.
3. The variables are often restricted to be nonnegative, but they can also be unrestricted in sign.
Definition of Linear Programming
More formally, a linear programming problem is specified as follows.
Given:
• n variables x1, ..., xn.
• m linear inequalities in these variables (equalities OK too). E.g., 3x1 + 4x2 ≤ 6, 0 ≤ x1 ≤ 3, etc.
• We may also have a linear objective function. E.g., 2x1 + 3x2 + x3.
Goal:
• Find values for the xi's that satisfy the constraints and maximize the objective. (In the “feasibility problem” there is no objective function: we just want to satisfy the constraints.)
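The feasibility problem fits the same machinery: with a zero objective, any feasible point is optimal, so an LP solver doubles as a feasibility checker. A minimal sketch, again assuming SciPy's linprog and reusing the example constraints above:

from scipy.optimize import linprog

# Feasibility only: find any point with 3*x1 + 4*x2 <= 6 and 0 <= x1 <= 3, x2 >= 0.
# A constant (zero) objective turns the solver into a pure feasibility check.
res = linprog(c=[0, 0], A_ub=[[3, 4]], b_ub=[6], bounds=[(0, 3), (0, None)])
print(res.status)   # 0 = a feasible point was found, 2 = infeasible
print(res.x)        # one feasible assignment (not necessarily unique)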
For instance, let's write a time allocation problem (dividing the 168 hours of a week among three activities S, P, and E) in this form.
Variables: S, P, E.
Objective: maximize 2P + E,
subject to the constraints:
S + P + E = 168
E ≥ 56
S ≥ 60
2S + E − 3P ≥ 150
P + E ≥ 70
P ≥ 0 (can't spend negative time partying)
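As a sketch of how this particular program could be solved numerically (assuming SciPy's scipy.optimize.linprog), negate the objective 2P + E since linprog minimizes, and rewrite each "≥" constraint as a "≤" constraint by multiplying through by -1:

from scipy.optimize import linprog

# Variable order: x = [S, P, E]. Maximize 2P + E by minimizing -(2P + E).
c = [0, -2, -1]

# Each ">=" constraint is negated into a "<=" constraint.
A_ub = [
    [ 0,  0, -1],   # E >= 56
    [-1,  0,  0],   # S >= 60
    [-2,  3, -1],   # 2S + E - 3P >= 150
    [ 0, -1, -1],   # P + E >= 70
]
b_ub = [-56, -60, -150, -70]

A_eq = [[1, 1, 1]]  # S + P + E = 168
b_eq = [168]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)   # S, P, E >= 0
S, P, E = res.x
print(S, P, E, 2 * P + E)   # expected optimum: S = 60, P = 19.5, E = 88.5 (objective 127.5)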
Linear programs can be solved in polynomial time (for instance, by the ellipsoid method or by interior-point methods); the widely used simplex method is fast in practice but is not worst-case polynomial.
References:
http://www.cs.cmu.edu/~avrim/451f11/lectures/lect1101.pdf