Paths to constrained Nash equilibria


Applied Mathematics and Optimization, Volume 27 (3) – Feb 4, 2005

Publisher
Springer Journals
Copyright
Copyright © 1993 by Springer-Verlag New York Inc.
Subject
Mathematics; Calculus of Variations and Optimal Control; Optimization; Systems Theory, Control; Theoretical, Mathematical and Computational Physics; Mathematical Methods in Physics; Numerical and Computational Physics, Simulation
ISSN
0095-4616
eISSN
1432-0606
DOI
10.1007/BF01314819

Abstract

We propose and analyze a primal-dual, infinitesimal method for locating Nash equilibria of constrained, non-cooperative games. The main object is a family of nonstandard Lagrangian functions, one for each player. With respect to these functions the algorithm yields separately, in differential form, directions of steepest descent in all decision variables and steepest ascent in all multipliers. For convergence we need marginal costs to be monotone and constraints to be convex inequalities. The method is largely decomposed and amenable to parallel computing. Other noteworthy features are: non-smooth data can be accommodated; no projection or optimization subroutines are needed; multipliers converge monotonically upward; and, finally, the implementation amounts, in essence, only to numerical integration.
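
The abstract does not spell out the players' nonstandard Lagrangians or the exact differential system, so the following is only a loose numerical sketch of the general idea: each player descends an ordinary Lagrangian in its own decision variable while the associated multiplier is pushed upward by constraint violation, and the whole flow is advanced by forward Euler steps, in the spirit of the remark that the implementation amounts to numerical integration. The two-player game, the costs c_i, the constraints g_i, and the multiplier dynamic dlam_i/dt = max(g_i, 0) are hypothetical choices made for illustration, not the paper's construction.

```python
import numpy as np

# Illustrative sketch only.  The abstract does not give the paper's nonstandard
# Lagrangians, so this uses ordinary per-player Lagrangians
#     L_i(x, lam_i) = c_i(x) + lam_i * g_i(x_i)
# and integrates a primal-dual flow by forward Euler:
#     dx_i/dt   = -dL_i/dx_i          (steepest descent in player i's decision)
#     dlam_i/dt = max(g_i(x_i), 0)    (multipliers driven only upward; an assumption)
#
# Hypothetical two-player game with scalar decisions x = (x1, x2):
#   Player 1: minimize c1(x) = (x1 - 1)^2 + 0.5*x1*x2   s.t. g1(x1) = x1 - 0.6 <= 0
#   Player 2: minimize c2(x) = (x2 + 1)^2 - 0.5*x1*x2   s.t. g2(x2) = -x2 - 0.8 <= 0

def own_gradients(x):
    """Each player's cost gradient in its own decision variable."""
    x1, x2 = x
    return np.array([2.0 * (x1 - 1.0) + 0.5 * x2,    # d c1 / d x1
                     2.0 * (x2 + 1.0) - 0.5 * x1])   # d c2 / d x2

def constraints(x):
    """Convex inequality constraints g_i(x_i) <= 0, one per player."""
    x1, x2 = x
    return np.array([x1 - 0.6, -x2 - 0.8])

CONSTRAINT_GRADS = np.array([1.0, -1.0])             # d g_i / d x_i (constants here)

x = np.zeros(2)       # initial decisions
lam = np.zeros(2)     # initial multipliers
h = 1e-3              # Euler step size

for _ in range(100_000):
    dx = -(own_gradients(x) + lam * CONSTRAINT_GRADS)   # descent in the Lagrangians
    dlam = np.maximum(constraints(x), 0.0)              # ascent, never decreasing
    x += h * dx
    lam += h * dlam

print("approximate constrained equilibrium x:", x)
print("multipliers lam:", lam)
print("constraint values g(x):", constraints(x))
```

In this toy game the marginal-cost map is monotone (the Jacobian of the stacked own-gradients has a positive-definite symmetric part) and both constraints are convex, so the iterates settle near x ≈ (0.6, -0.8), where both constraints bind. The multipliers never decrease along the way, loosely echoing the "monotonically upward" behaviour the abstract mentions, and no projection step is used.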

Journal

Applied Mathematics and Optimization, Springer Journals

Published: Feb 4, 2005
