
Lagrangian relaxation

In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information. The method penalizes violations of inequality constraints using a Lagrangian multiplier, which imposes a cost on violations; when the multiplier is nonnegative, an inequality constraint may be violated, but only at a price. In practice, the Lagrangian relaxed problem can be solved more easily than the original problem. The problem of optimizing the Lagrangian function over the dual variables (the Lagrangian multipliers) is the Lagrangian dual problem.

Mathematical description
Given a linear programming problem in $x \in \mathbb{R}^n$ with $A \in \mathbb{R}^{m \times n}$ of the following form:

$$\max \; c^{\top} x \quad \text{s.t.} \quad A x \le b$$

If we split the constraints in $A$ into two systems $A_1 \in \mathbb{R}^{m_1 \times n}$ and $A_2 \in \mathbb{R}^{m_2 \times n}$ such that $m_1 + m_2 = m$, we may write the system:

$$\max \; c^{\top} x \quad \text{s.t.} \quad (1)\; A_1 x \le b_1, \quad (2)\; A_2 x \le b_2$$

We may introduce the constraint (2) into the objective:

$$\max \; c^{\top} x + \lambda^{\top} (b_2 - A_2 x) \quad \text{s.t.} \quad (1)\; A_1 x \le b_1$$

If we let $\lambda = (\lambda_1, \ldots, \lambda_{m_2})$ be nonnegative weights, we get penalized if we violate the constraint (2), and we are also rewarded if we satisfy the constraint strictly. The above system is called the Lagrangian relaxation of our original problem.
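To make this concrete, here is a minimal numeric sketch on a tiny hypothetical 0-1 problem (all coefficients are invented for illustration): integrality plays the role of the "easy" constraint (1), and a single knapsack-style inequality plays the "hard" constraint (2) that is moved into the objective. Brute-force enumeration stands in for a real solver.

```python
from itertools import product

# Hypothetical toy instance:  max c.x  s.t.  x binary (1),  a.x <= b (2)
c = [10, 7, 5]   # objective coefficients
a = [4, 3, 2]    # coefficients of the "hard" constraint (2)
b = 5            # right-hand side of (2)

def original_optimum():
    """Brute-force the original problem over all binary x."""
    return max(
        sum(ci * xi for ci, xi in zip(c, x))
        for x in product([0, 1], repeat=len(c))
        if sum(ai * xi for ai, xi in zip(a, x)) <= b   # keep constraint (2)
    )

def relaxed_optimum(lam):
    """Lagrangian relaxation: constraint (2) is dropped and the penalty/reward
    term lam * (b - a.x) is added to the objective, with lam >= 0."""
    return max(
        sum(ci * xi for ci, xi in zip(c, x))
        + lam * (b - sum(ai * xi for ai, xi in zip(a, x)))
        for x in product([0, 1], repeat=len(c))
    )

print(original_optimum())     # 12: best feasible value
print(relaxed_optimum(2.0))   # 14.0: never smaller than the original optimum
```

With the multiplier set to 2.0, the relaxed optimum (14.0) overshoots the true optimum (12), illustrating that the relaxation gives a valid upper bound, as the next section shows in general.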


The LR solution as a bound


Of particular use is the property that for any fixed set of nonnegative $\tilde{\lambda}$ values, the optimal result to the Lagrangian relaxation problem will be no smaller than the optimal result to the original problem. To see this, let $\hat{x}$ be the optimal solution to the original problem, and let $\bar{x}$ be the optimal solution to the Lagrangian relaxation. We can then see that

$$c^{\top} \hat{x} \;\le\; c^{\top} \hat{x} + \tilde{\lambda}^{\top} (b_2 - A_2 \hat{x}) \;\le\; c^{\top} \bar{x} + \tilde{\lambda}^{\top} (b_2 - A_2 \bar{x})$$

The first inequality is true because $\hat{x}$ is feasible in the original problem (so $b_2 - A_2 \hat{x} \ge 0$ while $\tilde{\lambda} \ge 0$), and the second inequality is true because $\bar{x}$ is the optimal solution to the Lagrangian relaxation.
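The two inequalities can be checked numerically on a small hypothetical binary problem (coefficients invented for the example); here `x_hat` is the optimal feasible solution and `x_bar` the optimal relaxed solution:

```python
from itertools import product

# Hypothetical instance:  max c.x  s.t.  x binary,  a.x <= b;  fixed lam >= 0
c, a, b, lam = [10, 7, 5], [4, 3, 2], 5, 2.0
xs = list(product([0, 1], repeat=len(c)))

def obj(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def slack(x):
    return b - sum(ai * xi for ai, xi in zip(a, x))

def relaxed(x):
    return obj(x) + lam * slack(x)

x_hat = max((x for x in xs if slack(x) >= 0), key=obj)  # optimal for the original
x_bar = max(xs, key=relaxed)                            # optimal for the relaxation

# First inequality: x_hat is feasible, so slack(x_hat) >= 0 and lam >= 0.
assert obj(x_hat) <= relaxed(x_hat)
# Second inequality: x_bar maximizes the relaxed objective.
assert relaxed(x_hat) <= relaxed(x_bar)
print(obj(x_hat), relaxed(x_bar))   # 12 14.0
```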

Iterating towards a solution of the original problem


The above inequality tells us that if we minimize the maximum value we obtain from the relaxed problem, we obtain a tighter limit on the objective value of our original problem. Thus we can address the original problem by instead exploring the partially dualized problem

$$\min \; P(\lambda) \quad \text{s.t.} \quad \lambda \ge 0$$

where we define $P(\lambda)$ as

$$P(\lambda) = \max_{x} \; \left\{ c^{\top} x + \lambda^{\top} (b_2 - A_2 x) \;:\; (1)\; A_1 x \le b_1 \right\}$$

A Lagrangian relaxation algorithm thus proceeds to explore the range of feasible $\lambda$ values while seeking to minimize the result returned by the inner problem $P(\lambda)$. Each value returned by $P$ is a candidate upper bound to the problem, the smallest of which is kept as the best upper bound. If we additionally employ a heuristic, probably seeded by the $x$ values returned by $P$, to find feasible solutions to the original problem, then we can iterate until the best upper bound and the cost of the best feasible solution converge to a desired tolerance.
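A minimal sketch of such a loop, on a hypothetical toy instance (invented coefficients; enumeration stands in for the inner solver): the slack $b_2 - A_2 x$ of the inner maximizer is a subgradient of $P$ at the current $\lambda$, a diminishing step projected back to $\lambda \ge 0$ drives the dual search, and a deliberately crude greedy repair heuristic turns inner maximizers into feasible solutions for the lower bound.

```python
from itertools import product

# Hypothetical toy instance:  max c.x  s.t.  x binary (easy),  a.x <= b (dualized)
c, a, b = [10, 7, 5], [4, 3, 2], 5
xs = list(product([0, 1], repeat=len(c)))

def P(lam):
    """Inner relaxed problem: maximize c.x + lam * (b - a.x) over binary x."""
    def relaxed(x):
        return (sum(ci * xi for ci, xi in zip(c, x))
                + lam * (b - sum(ai * xi for ai, xi in zip(a, x))))
    x = max(xs, key=relaxed)
    return relaxed(x), x

def repair(x):
    """Toy heuristic: greedily keep items of x by decreasing value while feasible."""
    cap, value = b, 0
    for i in sorted(range(len(c)), key=lambda i: -c[i]):
        if x[i] and a[i] <= cap:
            cap, value = cap - a[i], value + c[i]
    return value

lam, best_ub, best_lb = 0.0, float("inf"), float("-inf")
for k in range(1, 51):
    ub, x = P(lam)
    best_ub = min(best_ub, ub)                    # every P(lam) is an upper bound
    best_lb = max(best_lb, repair(x))             # feasible value: a lower bound
    g = b - sum(ai * xi for ai, xi in zip(a, x))  # subgradient of P at lam
    lam = max(0.0, lam - (1.0 / k) * g)           # diminishing step, keep lam >= 0
print(best_lb, best_ub)   # the remaining gap includes any duality gap
```

On this instance the dual optimum is 12.5 while the integer optimum is 12, so even a perfect dual search leaves a duality gap; the greedy repair is intentionally simple and need not find the true optimum, which is why better heuristics matter in practice.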

References
Books
Ahuja, Ravindra K.; Magnanti, Thomas L.; Orlin, James B. (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 0-13-617549-X.
Bertsekas, Dimitri P. (1999). Nonlinear Programming (2nd ed.). Athena Scientific. ISBN 1-886529-00-0.
Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical Optimization: Theoretical and Practical Aspects [1]. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882.
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex Analysis and Minimization Algorithms, Volume I: Fundamentals. Grundlehren der Mathematischen Wissenschaften 305. Berlin: Springer-Verlag. pp. xviii+417. ISBN 3-540-56850-6.
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). "14 Duality for Practitioners". Convex Analysis and Minimization Algorithms, Volume II: Advanced Theory and Bundle Methods. Grundlehren der Mathematischen Wissenschaften 306. Berlin: Springer-Verlag. pp. xviii+346. ISBN 3-540-56852-2.
Lasdon, Leon S. (2002). Optimization Theory for Large Systems (reprint of the 1970 Macmillan ed.). Mineola, New York: Dover Publications. pp. xiii+523. MR 1888251.

Lemaréchal, Claude (2001). "Lagrangian relaxation". In Michael Jünger and Denis Naddef. Computational Combinatorial Optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15-19, 2000. Lecture Notes in Computer Science 2241. Berlin: Springer-Verlag. pp. 112-156. doi:10.1007/3-540-45586-8_4. ISBN 3-540-42877-1. MR 1900016.
Minoux, M. (1986). Mathematical Programming: Theory and Algorithms (translated by Steven Vajda from the 1983 French ed., Paris: Dunod). Chichester: John Wiley & Sons. pp. xxviii+489. ISBN 0-471-90170-9. MR 868279. (Second ed., in French: Programmation mathématique: Théorie et algorithmes. Paris: Editions Tec & Doc, 2008. xxx+711 pp. ISBN 978-2-7430-1000-3. MR 2571910.)

Articles
Everett, Hugh, III (1963). "Generalized Lagrange multiplier method for solving problems of optimum allocation of resources" [2]. Operations Research 11 (3): 399-417. doi:10.1287/opre.11.3.399. JSTOR 168028. MR 152360.
Kiwiel, Krzysztof C.; Larsson, Torbjörn; Lindberg, P. O. (August 2007). "Lagrangian relaxation via ballstep subgradient methods" [3]. Mathematics of Operations Research 32 (3): 669-686. doi:10.1287/moor.1070.0261. MR 2348241.

References
[1] http://www.springer.com/mathematics/applications/book/978-3-540-35445-1
[2] http://or.journal.informs.org/cgi/reprint/11/3/399
[3] http://mor.journal.informs.org/cgi/content/abstract/32/3/669

Article Sources and Contributors



Lagrangian relaxation Source: http://en.wikipedia.org/w/index.php?oldid=455915882 Contributors: Bfg, Encyclops, Headbomb, Kiefer.Wolfowitz, Nealeyoung, Rjwilmsi, 14 anonymous edits

License
Creative Commons Attribution-Share Alike 3.0 Unported //creativecommons.org/licenses/by-sa/3.0/
