Fixed Point Theory in Ordered Sets and Applications
From Differential and Integral Equations to Game Theory

Siegfried Carl · Seppo Heikkilä

Siegfried Carl
Martin-Luther-Universität
Halle-Wittenberg
Institut für Mathematik
Halle
Germany
siegfried.carl@mathematik.uni-halle.de

Seppo Heikkilä
Department of Mathematical Sciences
University of Oulu
Oulu
Finland
sheikki@sun3.oulu.fi

ISBN 978-1-4419-7584-3
e-ISBN 978-1-4419-7585-0
DOI 10.1007/978-1-4419-7585-0
Springer New York Dordrecht Heidelberg London
Mathematics Subject Classification Codes (2010): 06Axx, 06Bxx, 03F60, 28B05, 34Axx, 34Bxx, 34Gxx, 34Kxx, 35B51, 35J87, 35K86, 45N05, 46G12, 47H04, 47H10, 47J20, 49J40, 91A10, 91B16, 91B50, 58D25
© Springer Science+Business Media, LLC 2011
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street,
New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they
are not identified as such, is not to be taken as an expression of opinion as to whether or not they are
subject to proprietary rights.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

Dedicated in gratitude and high esteem to

Professor V. Lakshmikantham

Preface

Fixed point theory is one of the most powerful and fruitful tools of modern
mathematics and may be considered a core subject of nonlinear analysis. In
recent years a number of excellent monographs and surveys by distinguished
authors about fixed point theory have appeared such as, e.g., [2, 4, 7, 25, 31,
32, 100, 101, 103, 104, 108, 155, 196]. Most of the books mentioned above deal
with fixed point theory related to continuous mappings in topological or even
metric spaces (work of Poincaré, Brouwer, Lefschetz–Hopf, Leray–Schauder)
and all its modern extensions.
This book focuses on an order-theoretic fixed point theory and its applications to a wide range of diverse fields such as, e.g., (multi-valued) nonlocal
and/or discontinuous partial differential equations of elliptic and parabolic
type, differential equations and integral equations with discontinuous nonlinearities in general vector-valued normed spaces of non-absolutely integrable
functions containing the standard Bochner integrable functions as a special case,
and mathematical economics and game theory. In all these topics we are faced
with the central problem of handling the loss of continuity of mappings and/or
missing appropriate geometric and topological structure of their underlying
domain of definition. For example, it is noteworthy that, in particular, for
proving the existence of certain optimal strategies in game theory, there is
a need for purely order-related fixed point results in partially ordered sets
that are neither convex nor do they have lattice structure and where the fixed
point operator lacks continuity.
The aim of this monograph is to provide a unified and comprehensive
exposition of an order-theoretic fixed point theory in partially ordered sets
and its various useful interactions with topological structures. A characteristic
feature of this fixed point theory, which is developed in detail in Chap. 2, is
that it is based on an abstract recursion principle, called the Chain Generating
Recursion Principle, which was formulated in [112, 133], and which is the
common source of all the order-related fixed point results obtained in this
book. In particular, the developed fixed point theory includes the classical
order-theoretic fixed point result established by Knaster in [153], which was
later extended by Tarski in [215], as well as the fixed point theorems due to
Bourbaki and Kneser (cf. [228, Theorem 11.C]) and Amann (cf. [228, Theorem
11.D]). Surprisingly enough, very recently, the classical and seminal Knaster–
Tarski fixed point theorem has been applied to computational geometry in
[195]. This unexpected application emphasizes even more the importance of
an order-theoretic fixed point theory.
Chapter 1 serves as an introduction to the subject and discusses some
simple examples of the order-theoretic fixed point results along with simple
applications from each of the diverse fields. This will help the reader to get
some idea of the theory and its applications before entering the systematic
study. Chapter 3 provides preliminary results on multi-valued variational inequalities regarding the topological and order-theoretical structure of solution
sets. This chapter, which may be read independently, is of interest on its own
and contains new results. Our main emphasis is on Chaps. 4–8 where we
demonstrate the power of the developed fixed point theory of Chap. 2, which
runs like a thread through the entire book. Attempts have been made to attract a broad audience not only by the diverse fields of applications, but also
by emphasizing simple cases and ideas more than complicated refinements.
In the treatment of the applications, a wide range of mathematical theories
and methods from nonlinear analysis and integration theory are applied, an
outline of which has been given in an appendix chapter to make the book
self-contained.
This book is an outgrowth of the authors’ research on the subject during
the past 20 years. However, a great deal of the material presented here has
been obtained only in recent years and appears for the first time in book form.
We expect that our book will be accessible and useful to graduate students
and researchers in nonlinear analysis, pure and applied mathematics, game
theory, and mathematical economics.
We are most grateful to our friends and colleagues who contributed
through joint works and papers to the preparation of this book. Rather than
inadvertently leaving someone out, we have not listed the names, but we hope
our friends and collaborators will be satisfied with our thanks.
Finally, we wish to express our gratitude to the very professional editorial
staff of Springer, particularly to Vaishali Damle for her effective and productive collaboration.

Halle
Oulu
September 2010

Siegfried Carl
Seppo Heikkilä

Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VII

1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2  Fundamental Order-Theoretic Principles . . . . . . . . . . . . . . . . . . . . . . . 23
   2.1  Recursions and Iterations in Posets . . . . . . . . . . . . . . . . . . . . . . . 23
   2.2  Fixed Point Results in Posets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
        2.2.1  Fixed Points for Set-Valued Functions . . . . . . . . . . . . . . . . 26
        2.2.2  Fixed Points for Single-Valued Functions . . . . . . . . . . . . . . 30
        2.2.3  Comparison and Existence Results . . . . . . . . . . . . . . . . . . . 32
        2.2.4  Algorithmic Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
   2.3  Solvability of Operator Equations and Inclusions . . . . . . . . . . . . . 36
        2.3.1  Inclusion Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
        2.3.2  Single-Valued Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
   2.4  Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
        2.4.1  Fixed Point Results in Ordered Topological Spaces . . . . . . 41
        2.4.2  Equations and Inclusions in Ordered Normed Spaces . . . . . 44
   2.5  Fixed Point Results for Maximalizing Functions . . . . . . . . . . . . . . 49
        2.5.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
        2.5.2  Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
        2.5.3  Examples and Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
   2.6  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

3  Multi-Valued Variational Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . 57
   3.1  Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
   3.2  Multi-Valued Elliptic Variational Inequalities . . . . . . . . . . . . . . . . 62
        3.2.1  The Sub-Supersolution Method . . . . . . . . . . . . . . . . . . . . . 66
        3.2.2  Directedness of Solution Set . . . . . . . . . . . . . . . . . . . . . . . . 77
        3.2.3  Extremal Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
        3.2.4  Equivalence to Variational-Hemivariational Inequality . . . . 88
   3.3  Multi-Valued Parabolic Variational Inequalities . . . . . . . . . . . . . . 92
        3.3.1  Notion of Sub-Supersolution . . . . . . . . . . . . . . . . . . . . . . . . 98
        3.3.2  Multi-Valued Parabolic Equation . . . . . . . . . . . . . . . . . . . . 101
        3.3.3  Parabolic Variational Inequality . . . . . . . . . . . . . . . . . . . . . 117
   3.4  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

4  Discontinuous Multi-Valued Elliptic Problems . . . . . . . . . . . . . . . . . . 131
   4.1  Nonlocal and Discontinuous Elliptic Inclusions . . . . . . . . . . . . . . . 131
        4.1.1  Hypotheses, Main Result, and Preliminaries . . . . . . . . . . . . 132
        4.1.2  Proof of Theorem 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
        4.1.3  Extremal Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
        4.1.4  Application: Difference of Clarke's Gradient and Subdifferential . . 148
   4.2  State-Dependent Clarke's Gradient Inclusion . . . . . . . . . . . . . . . . 152
        4.2.1  Statement of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . 152
        4.2.2  Notions, Hypotheses, and Preliminaries . . . . . . . . . . . . . . . 155
        4.2.3  Existence and Comparison Result . . . . . . . . . . . . . . . . . . . 159
        4.2.4  Application: Multiplicity Results . . . . . . . . . . . . . . . . . . . . 163
   4.3  Discontinuous Elliptic Problems via Fixed Points for Multifunctions . . 166
        4.3.1  Abstract Fixed Point Theorems for Multi-Functions . . . . . . 166
        4.3.2  Discontinuous Elliptic Functional Equations . . . . . . . . . . . . 168
        4.3.3  Implicit Discontinuous Elliptic Functional Equations . . . . . 172
   4.4  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

5  Discontinuous Multi-Valued Evolutionary Problems . . . . . . . . . . . . . . 179
   5.1  Discontinuous Parabolic Inclusions with Clarke's Gradient . . . . . . 179
   5.2  Implicit Functional Evolution Equations . . . . . . . . . . . . . . . . . . . . 184
        5.2.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
        5.2.2  Main Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
        5.2.3  Generalization and Special Cases . . . . . . . . . . . . . . . . . . . . 190
        5.2.4  Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
   5.3  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

6  Banach-Valued Ordinary Differential Equations . . . . . . . . . . . . . . . . . 197
   6.1  Cauchy Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
        6.1.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
        6.1.2  A Uniqueness Theorem of Nagumo Type . . . . . . . . . . . . . . 198
        6.1.3  Existence Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
        6.1.4  Existence and Uniqueness Results . . . . . . . . . . . . . . . . . . . 205
        6.1.5  Dependence on the Initial Value . . . . . . . . . . . . . . . . . . . . . 209
        6.1.6  Well-Posedness of a Semilinear Cauchy Problem . . . . . . . . 210
   6.2  Nonlocal Semilinear Differential Equations . . . . . . . . . . . . . . . . . . 212
        6.2.1  Existence and Comparison Results . . . . . . . . . . . . . . . . . . . 213
        6.2.2  Applications to Multipoint Initial Value Problems . . . . . . . 218
   6.3  Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 219
        6.3.1  Well-Posedness Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
        6.3.2  Semilinear Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
        6.3.3  Extremal Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
   6.4  Singular Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
        6.4.1  First Order Explicit Initial Value Problems . . . . . . . . . . . . 225
        6.4.2  First Order Implicit Initial Value Problems . . . . . . . . . . . . 230
        6.4.3  Second Order Initial Value Problems . . . . . . . . . . . . . . . . . 235
        6.4.4  Second Order Boundary Value Problems . . . . . . . . . . . . . . 242
   6.5  Functional Differential Equations Containing Bochner Integrable Functions . . 249
        6.5.1  Hypotheses and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . 250
        6.5.2  Existence and Comparison Results . . . . . . . . . . . . . . . . . . . 254
   6.6  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

7  Banach-Valued Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
   7.1  Integral Equations in HL-Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 262
        7.1.1  Fredholm Integral Equations . . . . . . . . . . . . . . . . . . . . . . . 262
        7.1.2  Volterra Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . 268
        7.1.3  Application to Impulsive IVP . . . . . . . . . . . . . . . . . . . . . . . 272
        7.1.4  A Volterra Equation Containing HL Integrable Functions . . 278
   7.2  Integral Equations in Lp-Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 280
        7.2.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
        7.2.2  Urysohn Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
        7.2.3  Fredholm Integral Equations . . . . . . . . . . . . . . . . . . . . . . . 284
        7.2.4  Volterra Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . 292
        7.2.5  Application to Impulsive IVP . . . . . . . . . . . . . . . . . . . . . . . 296
   7.3  Evolution Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
        7.3.1  Well-Posedness Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
        7.3.2  Existence and Uniqueness Result . . . . . . . . . . . . . . . . . . . . 300
        7.3.3  Continuous Dependence on x0 . . . . . . . . . . . . . . . . . . . . . . 302
        7.3.4  Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
        7.3.5  Application to a Cauchy Problem . . . . . . . . . . . . . . . . . . . . 305
        7.3.6  Extremal Solutions of Evolution Equations . . . . . . . . . . . . 305
        7.3.7  Evolution Equations Containing Bochner Integrable Functions . . 309
        7.3.8  Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
   7.4  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315

8  Game Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
   8.1  Pure Nash Equilibria for Finite Simple Normal-Form Games . . . . . 319
        8.1.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
        8.1.2  Existence and Comparison Results . . . . . . . . . . . . . . . . . . . 320
        8.1.3  An Application to a Pricing Game . . . . . . . . . . . . . . . . . . . 323
   8.2  Pure and Mixed Nash Equilibria for Finite Normal-Form Games . . 325
        8.2.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
        8.2.2  Existence Result for the Greatest Nash Equilibrium . . . . . . 326
        8.2.3  Comparison Result for Utilities . . . . . . . . . . . . . . . . . . . . . 329
        8.2.4  Dual Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
        8.2.5  Applications to Finite Supermodular Games . . . . . . . . . . . 331
        8.2.6  Application to a Multiproduct Pricing Game . . . . . . . . . . . 336
   8.3  Pure Nash Equilibria for Normal-Form Games . . . . . . . . . . . . . . . 338
        8.3.1  Extreme Value Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
        8.3.2  Smallest and Greatest Pure Nash Equilibria . . . . . . . . . . . . 343
        8.3.3  Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
        8.3.4  Applications to a Multiproduct Pricing Game . . . . . . . . . . 355
        8.3.5  Minimal and Maximal Pure Nash Equilibria . . . . . . . . . . . . 359
   8.4  Pure and Mixed Nash Equilibria of Normal-Form Games . . . . . . . 364
        8.4.1  Definitions and Auxiliary Results . . . . . . . . . . . . . . . . . . . . 365
        8.4.2  Existence and Comparison Results . . . . . . . . . . . . . . . . . . . 369
        8.4.3  Applications to Supermodular Games . . . . . . . . . . . . . . . . . 373
   8.5  Undominated and Weakly Dominating Strategies and Weakly
        Dominating Pure Nash Equilibria for Normal-Form Games . . . . . . 379
        8.5.1  Existence of Undominated Strategies . . . . . . . . . . . . . . . . . 379
        8.5.2  Existence of Weakly Dominating Strategies and Pure Nash Equilibria . . 382
        8.5.3  Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
   8.6  Pursuit and Evasion Game . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
        8.6.1  Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
        8.6.2  Winning Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
        8.6.3  Applications and Special Cases . . . . . . . . . . . . . . . . . . . . . . 393
   8.7  Notes and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

9  Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
   9.1  Analysis of Vector-Valued Functions . . . . . . . . . . . . . . . . . . . . . . . 401
        9.1.1  µ-Measurability and µ-Integrability of Banach-Valued Functions . . 401
        9.1.2  HL Integrability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
        9.1.3  Integrals of Derivatives of Vector-Valued Functions . . . . . . 412
        9.1.4  Convergence Theorems for HL Integrable Functions . . . . . . 416
        9.1.5  Ordered Normed Spaces of HL Integrable Functions . . . . . . 419
   9.2  Chains in Ordered Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . 421
        9.2.1  Chains in Lp-Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
        9.2.2  Chains of Locally Bochner Integrable Functions . . . . . . . . . 424
        9.2.3  Chains of HL Integrable and Locally HL Integrable Functions . . 426
        9.2.4  Chains of Continuous Functions . . . . . . . . . . . . . . . . . . . . . 429
        9.2.5  Chains of Random Variables . . . . . . . . . . . . . . . . . . . . . . . 432
        9.2.6  Properties of Order Intervals and Balls in Ordered Function Spaces . . 433
   9.3  Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
        9.3.1  Definition of Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . 436
        9.3.2  Chain Rule and Lattice Structure . . . . . . . . . . . . . . . . . . . . 438
   9.4  Operators of Monotone Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
        9.4.1  Main Theorem on Pseudomonotone Operators . . . . . . . . . . 440
        9.4.2  Leray–Lions Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
        9.4.3  Multi-Valued Pseudomonotone Operators . . . . . . . . . . . . . . 443
   9.5  First Order Evolution Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 447
        9.5.1  Evolution Triple and Generalized Derivative . . . . . . . . . . . 447
        9.5.2  Existence Results for Evolution Equations . . . . . . . . . . . . . 451
   9.6  Calculus of Clarke's Generalized Gradient . . . . . . . . . . . . . . . . . . . 452

List of Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475

1 Introduction

In this introductory chapter we give a short account of the contents of the
book and discuss simple notions and examples of the fixed point theory to be
developed and applied to more involved applications in later chapters. As an
introduction to the fixed point theory and its applications let us recall two
fixed point theorems on a nonempty closed and bounded subset P of Rm , one
purely topological (Brouwer’s fixed point theorem) and one order-theoretically
based. A point x ∈ P is called a fixed point of a function G : P → P if x = Gx.
We assume that Rm is equipped with the Euclidean metric.
Theorem 1.1 (Brouwer’s Fixed Point Theorem). If P is a closed,
bounded, and convex subset of Rm , then every continuous function G : P → P
has a fixed point.
To formulate the purely order-theoretic fixed point theorem we equip Rm with
the coordinatewise partial order ’≤’, i.e., for x, y ∈ Rm , we define x ≤ y if
and only if xi ≤ yi , i = 1, . . . , m. A function G : P → P is called increasing if
x ≤ y implies Gx ≤ Gy. Further, we will need the notion of a sup-center of
the set P , which is defined as follows: A point c ∈ P is called a sup-center of
P if sup{c, x} ∈ P for each x ∈ P . The next fixed point theorem is a special
case of Corollary 2.41(a) of Chap. 2.
Theorem 1.2. If P is a closed and bounded subset of Rm having a sup-center,
then every increasing function G : P → P has a fixed point.
Note that in Theorem 1.2 neither continuity of the fixed point operator nor
convexity of the set P is needed. Let us give two examples of sets P that
have the required properties of Theorem 1.2. The geometrical center c =
(c1 , . . . , cm ) ∈ Rm of every set
P = {(x1, . . . , xm) ∈ Rm : Σ_{i=1}^m |xi − ci|^p ≤ r^p},   p, r ∈ (0, ∞),        (1.1)

is a sup-center of P. Because these sets are also closed and bounded, every
increasing mapping G : P → P has a fixed point. Notice that P is not convex if
0 < p < 1, as assumed in Theorem 1.1. If P has the smallest element c, then c is
a sup-center of P. If m = 2, a necessary and sufficient condition for a point
c = (c1, c2) of P to be a sup-center of P is that whenever a point y = (y1, y2) of P
and c are unordered, then (y1, c2) ∈ P if y2 < c2 and (c1, y2) ∈ P if y1 < c1. The
second example of a set P ⊂ R2 is illustrated by Fig. 1.1, where P consists of all
the solid lines and the isolated points. One easily verifies that c = (0, 0) is a
sup-center.

[Fig. 1.1. Closed Bounded Set in R2 with (0, 0) as Sup-Center]
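The sup-center property of the sets (1.1) can also be checked mechanically; the following small sanity check (our own illustration, not from the book; dimension, radius, and sample size are arbitrary choices) samples points of P in the non-convex case p = 1/2 and verifies that the componentwise maximum with the geometrical center stays in P.

# Sanity check (illustrative, not from the book): for the set P in (1.1) with
# p = 1/2 < 1, so that P is not convex, the geometrical center c is a sup-center:
# the componentwise maximum sup{c, x} stays in P for every sampled x in P.
import numpy as np

rng = np.random.default_rng(0)
p, r, c = 0.5, 1.0, np.zeros(3)

def in_P(x):                      # membership test for the set (1.1)
    return np.sum(np.abs(x - c) ** p) <= r ** p

samples = rng.uniform(-1.0, 1.0, size=(100_000, 3))
points = [x for x in samples if in_P(x)]
assert all(in_P(np.maximum(c, x)) for x in points)
print(f"checked {len(points)} points of P: sup{{c, x}} always lies in P")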
Theorem 1.1 and Theorem 1.2 can be applied, e.g., in the study of the
solvability of a finite system of equations. For simplicity consider the system
u = u(x, y),   v = v(x, y).                                                    (1.2)

Assume that P is a closed and bounded subset of R2 , and that G = (u, v)
maps P into itself. By Theorem 1.1 the system (1.2) has a solution if G is
continuous and P is convex. But there is no constructive method to solve
system (1.2) under these hypotheses. By Theorem 1.2 the system (1.2) has
a solution if G is increasing and P is only assumed to possess a sup-center.
As we shall see in Chap. 2 the proof of Theorem 1.2 is constructive. In the
special case when strictly monotone sequences of the image G[P ] are finite,
the following algorithm can be applied to obtain a solution of (1.2) when
the sup-center of P is c = (c1 , c2 ). Maple commands ‘fi;od’ in the following
program mean ‘end if;end do’.
u := u(x, y) : v := v(x, y) : x := c1 : y := c2 :
for k from 0 while abs(u − x) + abs(v − y) > 0 do;
if (u − x)*(v − y) < 0 then x := max(x, u) : y := max(y, v)
else x := u : y := v:fi;od;
sol := (x, y);
It is shown in Chap. 2 that the above algorithm can be applied to approximate
a solution of (1.2) in the case when G is continuous and increasing, replacing
G by its suitable upper and lower estimates.
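For readers who prefer a plain programming language, here is a rough Python transcription of the same iteration (our own sketch, not the book's code); the sample map G below is an arbitrary increasing function with finitely many values, so the loop terminates.

# Python transcription (illustrative) of the iteration above: starting from a
# sup-center c of P, replace (x, y) by sup{(x, y), G(x, y)} whenever the two
# points are unordered, and by G(x, y) otherwise, until a fixed point is reached.
from math import floor

def solve(G, c, max_iter=1000):
    x, y = c
    for _ in range(max_iter):
        u, v = G(x, y)
        if (u, v) == (x, y):              # fixed point of G reached
            return x, y
        if (u - x) * (v - y) < 0:         # (x, y) and G(x, y) are unordered
            x, y = max(x, u), max(y, v)   # take the supremum of the two points
        else:
            x, y = u, v
    raise RuntimeError("no fixed point found")

# an increasing sample map on P = [0, 1] x [0, 1] with finitely many values
G = lambda x, y: ((1 + floor(2 * y)) / 4, (1 + floor(2 * x)) / 4)
print(solve(G, (0.0, 0.0)))               # -> (0.25, 0.25), a solution of (1.2)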
Consider next generalizations of Theorem 1.1 and Theorem 1.2 to the case
when P is a nonempty subset of an infinite-dimensional normed space E. The
generalization of Brouwer’s fixed point theorem to infinite-dimensional Banach spaces requires the compactness of the fixed point operator. As compact
operators play a central role also in later chapters we recall their definition
here for convenience, see, e.g., [62, 228].
Definition 1.3. Let X and Y be normed spaces, and T : D(T ) ⊆ X → Y
an operator with domain D(T ). The operator T is called compact iff T is
continuous, and T maps bounded sets into relatively compact sets. Compact
operators are also called completely continuous.
In Theorem 1.5 we assume that E is ordered by a closed and convex cone E+
for which −E+ ∩ E+ = {0}. A subset A of P is said to have a sup-center in
P if there exists a c ∈ P such that sup{c, x} exists in E and belongs to P for
every x ∈ A.
Theorem 1.4 (Schauder’s Fixed Point Theorem). Let P be a nonempty,
closed, bounded, and convex subset of the Banach space E, and assume that
G : P → P is compact. Then G has a fixed point.
Theorem 1.5 ([116]). Let P be a subset of an ordered normed space E, and
let G : P → P be increasing. If the weak closure of G[P ] has a sup-center in
P , and if monotone sequences of G[P ] have weak limits in P , then G has a
fixed point.
If P is, e.g., the closed unit ball in l2 defined by
l2 = {x = (x1, x2, . . . ) : Σ_{i=1}^∞ |xi|² < ∞},

then the conclusion of Theorem 1.4 does not hold if G : P → P is only
assumed to be continuous (see Kakutani’s counterexample). Thus the result
of Theorem 1.4 is not valid if the compactness hypothesis of G is missing.
On the other hand, no compactness or continuity is assumed in Theorem 1.5,
which is also a consequence of Proposition 2.40(a). The geometrical centers
of bounded and closed balls of p-normed spaces lp , ordered coordinatewise,
and Lp (Ω), 1 ≤ p < ∞, ordered a.e. pointwise, are their sup-centers. This
is true also for closed and bounded balls of Sobolev spaces W 1,p (Ω) and
W01,p (Ω), 1 < p < ∞, ordered a.e. pointwise. Moreover, these balls are weakly
sequentially closed and their monotone sequences have weak limits. Hence, if
P is any of these balls, then every increasing function G : P → P has a fixed
point by Theorem 1.5. To demonstrate the applicability of Theorem 1.4 and
Theorem 1.5 let us consider two simple examples of elliptic Dirichlet boundary
value problems with homogeneous boundary values.
Example 1.6.
−∆u(x) = f (x, u(x)) in Ω,   u = 0 on ∂Ω,                                      (1.3)

where Ω ⊂ RN is a bounded domain with Lipschitz boundary ∂Ω. Let us
assume that f satisfies the following conditions:
(f1) f : Ω × R → R is a Carathéodory function, i.e., x ↦ f (x, s) is measurable
in Ω for all s ∈ R, and s ↦ f (x, s) is continuous for almost all (a.a.)
x ∈ Ω.
(f2) The function f fulfills the following growth condition: there is a function
k ∈ L2+ (Ω) and a positive constant a such that for a.a. x ∈ Ω and for all
s ∈ R we have
|f (x, s)| ≤ k(x) + a|s|.
By L2+ (Ω) we denote the positive cone of all nonnegative functions of L2 (Ω).
Setting V0 = W01,2 (Ω), V0∗ its dual space, A = −∆, and defining A : V0 → V0∗
by
⟨Au, ϕ⟩ = ∫_Ω ∇u∇ϕ dx,   ∀ ϕ ∈ V0,

then A : V0 → V0∗ is a strongly monotone, bounded, and continuous operator.
Denoting by F the Nemytskij operator associated with f by
F (u)(x) = f (x, u(x)),
then, in view of (f1)–(f2), F : L2 (Ω) → L2 (Ω) is continuous and bounded.
The compact embedding i : V0 ↪ L2 (Ω) readily implies that the operator
F = i∗ ◦ F ◦ i : V0 → V0∗ (i∗ is the adjoint operator of i) given by
⟨F(u), ϕ⟩ = ∫_Ω F (u) ϕ dx,   ∀ ϕ ∈ V0
is completely continuous. With these notations the weak solution of (1.3) can
be given the following form: Find u ∈ V0 such that
Au − F(u) = 0 in V0∗.                                                          (1.4)

Since F : V0 → V0∗ is completely continuous and bounded, and A : V0 → V0∗ is
strongly monotone, continuous, and bounded, it follows that A − F : V0 → V0∗
is, in particular, continuous, bounded, and pseudomonotone. The classical
theory on pseudomonotone operators due to Brezis and Browder (see, e.g.,
[229]) ensures that if A − F : V0 → V0∗ is, in addition, coercive, then A − F :
V0 → V0∗ is surjective, which means that (1.4) has a solution, i.e., (1.3) has
a weak solution. A sufficient condition to ensure coerciveness of A − F is
that the positive constant a in (f2) satisfies a < λ1 , where λ1 is the first
Dirichlet eigenvalue of A = −∆, which is known to be positive and simple,
see [6]. This can readily be verified by using (f2) and the following variational
characterization of the first eigenvalue λ1 by
λ1 = inf_{0 ≠ v ∈ V0} ( ∫_Ω |∇v|² dx / ∫_Ω |v|² dx ).
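In the estimate below, this characterization enters through the equivalent inequality (a one-line consequence, not displayed in the text):

\[ \|v\|_2 \;\le\; \frac{1}{\sqrt{\lambda_1}}\,\|\nabla v\|_2 \qquad \text{for all } v \in V_0. \]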
Now we estimate as follows
⟨Au − F(u), u⟩ ≥ ∫_Ω |∇u|² dx − ‖k‖₂‖u‖₂ − a‖u‖₂²
               ≥ (1 − a/λ1)‖∇u‖₂² − (‖k‖₂/√λ1)‖∇u‖₂,
where ‖ · ‖₂ = ‖ · ‖_{L2(Ω)}. As ‖u‖ = ‖∇u‖₂ is an equivalent norm in V0, we see
from the last estimate that

(1/‖∇u‖₂) ⟨Au − F(u), u⟩ → ∞   as ‖∇u‖₂ → ∞,
which proves the coercivity, and thus the existence of solutions of (1.4).
An alternative approach to the existence proof for (1.4) that is closely
related to the pseudomonotone operator theory is based on Schauder’s fixed
point theorem (see Theorem 1.4). To this end, problem (1.4) is transformed
into a fixed point equation as follows: As A = −∆ : V0 → V0∗ is a linear,
strongly monotone, and bounded operator, it follows that the inverse A−1 :
V0∗ → V0 is linear and bounded, which allows us to rewrite (1.4) in the form:
Find u ∈ V0 such that
u = A−1 ◦ F(u)                                                                 (1.5)

holds, i.e., that u ∈ V0 is a fixed point of the operator G = A−1 ◦ F.
Since under hypotheses (f1)–(f2), F : V0 → V0∗ is completely continuous, and
A−1 : V0∗ → V0 is linear and bounded, it readily follows that G : V0 → V0 is
continuous and compact. In order to apply Schauder's theorem we are going to
verify that, under the same assumption on a, i.e., a < λ1, G maps a closed ball
B(0, R) ⊂ V0 into itself, which then yields the existence of solutions of (1.4)
by Schauder's theorem. Let v ∈ B(0, R), and denote
and thus the existence of solutions of (1.4). Let v ∈ B(0, R), and denote
u = Gv. Then, by definition of the operator G, u ∈ V0 satisfies
∫_Ω ∇u∇ϕ dx = ∫_Ω F (v) ϕ dx,   ∀ ϕ ∈ V0.

In particular, the last equation holds for u = ϕ, which yields
‖∇u‖₂² = ∫_Ω F (v) u dx ≤ ‖F (v)‖₂‖u‖₂ ≤ ‖k‖₂‖u‖₂ + a‖v‖₂‖u‖₂
        ≤ (a/λ1)‖∇v‖₂‖∇u‖₂ + (‖k‖₂/√λ1)‖∇u‖₂,
which yields (note u = Gv) the norm estimate in V0
‖Gv‖_{V0} ≤ (a/λ1)‖∇v‖₂ + ‖k‖₂/√λ1,   ∀ v ∈ V0,

where ‖u‖_{V0} := ‖∇u‖₂. Thus if R > 0 is chosen in such a way that

(a/λ1)R + ‖k‖₂/√λ1 ≤ R,
then G provides a mapping of B(0, R) into itself. Such an R always exists,
because a/λ1 < 1. This completes the existence proof via Schauder's fixed point
theorem.
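For instance, spelling out the last inequality (a small computation omitted above), any radius

\[ R \;\ge\; \frac{\|k\|_2}{\sqrt{\lambda_1}\,\bigl(1 - a/\lambda_1\bigr)} \]

does the job, since then (a/λ1)R + ‖k‖₂/√λ1 ≤ R.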
Schauder’s theorem fails if F : V0 → V0∗ lacks compactness, which may occur,
e.g., when in (f2) a critical growth of the form
|f (x, s)| ≤ k(x) + a|s|^(2∗−1)
is allowed, where 2∗ is the critical Sobolev exponent. Lack of compactness
occurs also if (1.3) is studied in unbounded domains, or if s ↦ f (x, s) is no
longer continuous. It is Theorem 1.5 that allows us to deal with these kinds
of problems provided the fixed point operator G is increasing. In particular,
if only continuity of G is violated, then neither monotone operator theory
in the sense of Brezis–Browder–Lions–Minty nor fixed point theorems that
assume as a least requirement the continuity of the fixed point operator can
be applied. To give a simple example, where standard methods fail, consider
the next example.
Example 1.7. Let Ω be as in the example before. We study the following discontinuous Dirichlet boundary value problem:

−∆u(x) = a[u(x)] + k(x) in Ω,   u = 0 on ∂Ω,                                   (1.6)

where a > 0 is some constant, k ∈ L2 (Ω), and s ↦ [s] stands for the integer
function, i.e., [s] denotes the greatest integer with [s] ≤ s. Apparently, in this
case f (x, s) := a[s] + k(x) is discontinuous in s ∈ R. Set k̃(x) = |k(x)| + 1,
then we have k̃ ∈ L2+ (Ω), and the following estimate holds
|f (x, s)| ≤ k̃(x) + a|s|.
Due to the structure of f the Nemytskij operator F : L2 (Ω) → L2 (Ω) is still
well defined and bounded, however, F is no longer continuous. With the same
notation as in Example 1.6 we can transform the elliptic problem (1.6) into
the fixed point equation in V0 of the form
u = A−1 ◦ F(u).                                                                (1.7)

The same estimate as in the previous example shows that the fixed point
operator G = A−1 ◦ F maps a ball B(0, R̃) ⊂ V0 into itself provided a < λ1 ,
and R̃ > 0 is sufficiently large. Note, however, that the fixed point operator is
no longer continuous. Now, we easily observe that G : V0 → V0 is increasing
with respect to the underlying natural partial order in V0 defined via the order
cone L2+ (Ω). The latter is a simple consequence of the fact that F : V0 → V0∗
is increasing, and because of the inverse monotonicity of A−1 , which is a
consequence of the maximum principle for the Laplacian. Taking into account
the comments after Theorem 1.5, we may apply Theorem 1.5 to ensure that
G has a fixed point, which proves the existence of weak solutions for (1.6)
provided 0 < a < λ1 . It should be noted that the classical fixed point results
for increasing self-mappings due to Amann, Tarski, and Bourbaki (see [228])
cannot be applied here.
Further applications of Theorem 1.5 to deal with elliptic problems that
lack compactness are demonstrated in [48], where we prove existence results
for elliptic problems with critical growth or discontinuity of the data.
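To illustrate the mechanism behind this argument, the following sketch (our own one-dimensional finite-difference analogue; the book itself works with the weak formulation in V0) iterates u_{n+1} = A⁻¹(a[u_n] + k) for −u″ = a[u] + k on (0, 1) with zero boundary values. Since the discrete A⁻¹ is inverse monotone and the right-hand side is increasing in u, the iterates increase from the subsolution u ≡ 0 and stabilize after finitely many steps; grid size and the data a, k are arbitrary choices.

# 1D finite-difference analogue of (1.6) (illustrative only): monotone fixed point
# iteration u_{n+1} = A^{-1}(a*floor(u_n) + k), starting from the subsolution u = 0.
import numpy as np

n = 99                                        # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2    # Dirichlet finite-difference Laplacian
a, k = 5.0, 40.0 * np.ones(n)                 # a < lambda_1 = pi^2 keeps iterates bounded

u = np.zeros(n)
for it in range(1, 1001):
    u_new = np.linalg.solve(A, a * np.floor(u) + k)   # one step of the increasing map G
    if np.max(np.abs(u_new - u)) < 1e-12:             # floor(u_n) has stabilized
        u = u_new
        break
    u = u_new
print(f"converged after {it} iterations, max u = {u.max():.4f}")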
The results of Theorem 1.4 and Theorem 1.5 can be extended to set-valued
(also called multi-valued) mappings. Let us assume that P is a nonempty
subset of a topological space X. In Theorem 1.9 we assume that X is equipped
with such a partial ordering that the order intervals [a, b] = {x ∈ X : a ≤ x ≤
b} are closed. Denote by 2P the set of all subsets of P . An element x of P is
called a fixed point of a set-valued mapping F : P → 2P if x ∈ F(x). We say
that F is increasing if, whenever x ≤ y in P , then for every z ∈ F(x) there
exists a w ∈ F(y) such that z ≤ w, and for every w ∈ F(y) there exists a
z ∈ F (x) such that z ≤ w.
Theorem 1.8 (Generalized Theorem of Kakutani). A multi-valued function F : P → 2P has a fixed point if P is a nonempty, compact, and convex set
in a locally convex Hausdorff space X, F : P → 2P is upper semi-continuous,
and if the set F(x) is nonempty, closed, and convex for all x ∈ P .


The following theorem is a consequence of Theorem 2.12, which is proved
in Chap. 2.
Theorem 1.9. A multi-valued function F : P → 2P has a fixed point if F is
increasing, its values F(x) are nonempty and compact for all x ∈ P , chains
of F[P ] have supremums and infimums, and if F[P ] has a sup-center in P .
In particular, if P is any set defined in (1.1), then every increasing mapping
F : P → 2P whose values are nonempty closed subsets of Rm has a fixed
point by Theorem 1.9. As a further consequence of Theorem 1.9 one gets
the following order-theoretic fixed point result in infinite-dimensional ordered
Banach spaces, which is useful in applications to discontinuous differential
equations (see Theorem 4.37).
Theorem 1.10. Let P be a closed and bounded ball in a reflexive lattice-ordered
Banach space X, and assume that ‖x⁺‖ = ‖sup{0, x}‖ ≤ ‖x‖ holds
nonempty and weakly sequentially closed, has a fixed point.
To give an idea of how Theorem 1.10 can be applied to differential equations, let us consider a simple example.
Example 1.11. Consider the following slightly extended version of problem
(1.6):
−∆u(x) = a[u(x)] + g(x, u(x)) + k(x) in Ω,   u = 0 on ∂Ω,                      (1.8)

where g : Ω × R → R is a Carathéodory function with the following growth
condition
(g) There exist a positive constant b with b < λ1 − a, and a h ∈ L2 (Ω), such
that for a.a. x ∈ Ω and for all s ∈ R
|g(x, s)| ≤ h(x) + b|s|
holds. Here a and λ1 are as in Example 1.7.
If we rewrite the right-hand side of equation (1.8) in the form
f (x, s, r) := a[r] + g(x, s) + k(x),                                          (1.9)

then we can distinguish between the continuous and discontinuous dependence
of the right-hand side of (1.8). This allows an approach toward the existence
of solutions of (1.8) by means of the multi-valued fixed point Theorem 1.9.
Note that s ↦ f (x, s, r) is continuous, and r ↦ f (x, s, r) is discontinuous and
monotone increasing. Let v ∈ V0 be fixed, and consider the boundary value
problem
−∆u(x) = f (x, u(x), v(x)) in Ω,   u = 0 on ∂Ω.                                (1.10)

As the function (x, s) ↦ f (x, s, v(x)) with f defined in (1.9) is a Carathéodory
function, one can apply the same approach as in Example 1.6 to get the
existence of solutions for (1.10). For fixed v ∈ V0 , denote now by Gv the set
of all solutions of (1.10). This provides a multi-valued mapping G : V0 → 2V0 ,
and certainly any fixed point of G is a solution of the original boundary
value problem (1.8), and vice versa. By similar estimates as in Examples
1.6 and 1.7 one can show that under the given assumptions, in particular
due to 0 < a + b < λ1 , there is a closed ball B(0, R) ⊂ V0 such that the
multi-valued mapping G maps B(0, R) into itself. As V0 is a reflexive latticeordered Banach space satisfying ku+ k = k sup{0, u}k ≤ kuk for all u ∈ V0 ,
for G : B(0, R) → 2B(0,R) to possess a fixed point it is enough to show that
G : B(0, R) → 2B(0,R) is increasing, and that the images Gv are weakly
sequentially closed, see Theorem 4.37. This will be demonstrated for more
involved elliptic problems in Chap. 4.
Chapter 3 is devoted to comparison principles for, in general, multi-valued
elliptic and parabolic variational inequalities, with an account of the main
differences between them. Elliptic multi-valued variational inequalities of the
following kind are considered: Let K ⊆ W 1,p (Ω) be a closed convex set. Find
u ∈ K, η ∈ Lq (Ω), and ξ ∈ Lq (∂Ω) satisfying:
η(x) ∈ ∂j1 (x, u(x)), a.e. x ∈ Ω,   ξ(x) ∈ ∂j2 (x, γu(x)), a.e. x ∈ ∂Ω,        (1.11)

⟨Au − h, v − u⟩ + ∫_Ω η (v − u) dx + ∫_{∂Ω} ξ (γv − γu) dσ ≥ 0,  ∀ v ∈ K,      (1.12)

where s ↦ ∂jk (x, s) are given by Clarke's generalized gradient of locally
Lipschitz functions s ↦ jk (x, s), k = 1, 2, γ is the trace operator, and A
is some quasilinear elliptic operator of Leray–Lions type. As for parabolic
multi-valued variational inequalities, the underlying solution space is
W = {u ∈ X : ∂u/∂t ∈ X ∗ },
where X = Lp(0, τ; W^{1,p}(Ω)), and X∗ is its dual space. Consider the time
derivative L = ∂/∂t : D(L) → X∗ as an operator from the domain D(L) to X∗,
where D(L) is given by

D(L) = {u ∈ W : u(0) = 0},
and let K ⊆ X be closed and convex. The following general class of multi-valued
parabolic variational inequalities is treated in Chap. 3: Find u ∈ K ∩ D(L),
η ∈ Lq(Q), and ξ ∈ Lq(Γ) satisfying:

η(x, t) ∈ ∂j1 (x, t, u(x, t)), for a.e. (x, t) ∈ Q,                            (1.13)
ξ(x, t) ∈ ∂j2 (x, t, γu(x, t)), for a.e. (x, t) ∈ Γ, and                       (1.14)
⟨Lu + Au − h, v − u⟩ + ∫_Q η (v − u) dxdt + ∫_Γ ξ (γv − γu) dΓ ≥ 0, ∀ v ∈ K,   (1.15)

where Q = Ω × (0, τ ) and Γ = ∂Ω × (0, τ ). For both problems (1.11)–(1.12)
and (1.13)–(1.15) we establish existence and comparison results in terms of
appropriately defined sub- and supersolutions, and characterize their solution
sets topologically and order-theoretically. We are demonstrating by a number
of examples that the variational inequality problems (1.11)–(1.12) and (1.13)–
(1.15) include a wide range of specific elliptic and parabolic boundary value
problems and variational inequalities. In this sense, Chap. 3 is not only a
prerequisite for Chaps. 4 and 5, but it is of interest on its own and can be
read independently.
In Chaps. 4 and 5 we apply the fixed point results of Chap. 2 combined
with the comparison results of Chap. 3 to deal with discontinuous single and
multi-valued elliptic and parabolic problems of different kinds. In particular, we consider nonlocal, discontinuous elliptic and parabolic boundary value
problems and multi-valued elliptic problems with discontinuously perturbed
Clarke’s generalized gradient. In the study of those problems, besides fixed
point and comparison results, the existence of extremal solutions of certain associated auxiliary problems plays an important role. Extremal solution results
that are proved in Chap. 3 require rather involved techniques. These results
are used to transform a given multi-valued elliptic or parabolic problem into
a fixed point equation.
Differential and integral equations treated in Sects. 6.1–6.4 and 7.1–7.2
contain functions that are Henstock–Lebesgue (HL) integrable with respect
to the independent variable. A function g from a compact real interval [a, b] to
a Banach space E is called HL integrable if there is a function f : [a, b] → E,
called a primitive of g, with the following property: To each ε > 0 there corresponds
such a function δ : [a, b] → (0, ∞) that whenever [a, b] = ∪_{i=1}^m [ti−1, ti] and
ξi ∈ [ti−1, ti] ⊂ (ξi − δ(ξi), ξi + δ(ξi)) for all i = 1, . . . , m, then

Σ_{i=1}^m ‖f (ti) − f (ti−1) − g(ξi)(ti − ti−1)‖ < ε.                          (1.16)

Criteria for HL integrability that are sufficient in most of our applications are
given by the following lemma.
Lemma 1.12. Given a function g : [a, b] → E, assume there exists a continuous function f : [a, b] → E and a countable subset Z of [a, b] such that
f ′(t) = g(t) for all t ∈ [a, b] \ Z. Then g is HL integrable on [a, b], and f is a
primitive of g.
Proof: Since Z is countable, it can be represented in the form Z = {xj}j∈N.
Let ε > 0 be given. Since f is continuous, and the values of g have finite norms,
then for every xj ∈ Z there exists a δ(xj) > 0 such that ‖f (t̄) − f (t)‖ < 2^{−j−1}ε,
and ‖g(xj)‖(t̄ − t) < 2^{−j−1}ε whenever xj − δ(xj) < t ≤ xj ≤ t̄ < xj + δ(xj).
To each ξ ∈ [a, b] \ Z there corresponds, since f ′(ξ) exists, such a δ(ξ) > 0
that ‖f (t̄) − f (t) − f ′(ξ)(t̄ − t)‖ < ε(t̄ − t)/(b − a) whenever ξ ∈ [t, t̄] ⊂
(ξ − δ(ξ), ξ + δ(ξ)). Consequently, if a = t0 < t1 < · · · < tm = b, and if
ξi ∈ [ti−1, ti] ⊂ (ξi − δ(ξi), ξi + δ(ξi)) for all i = 1, . . . , m, then

Σ_{i=1}^m ‖f (ti) − f (ti−1) − g(ξi)(ti − ti−1)‖
   ≤ Σ_{ξi=xj∈Z} (‖f (ti) − f (ti−1)‖ + ‖g(xj)‖(ti − ti−1))
   + Σ_{ξi∈[a,b]\Z} ‖f (ti) − f (ti−1) − f ′(ξi)(ti − ti−1)‖ ≤ 2ε.

Thus g is HL integrable and f is its primitive.                              □

Remark 1.13. If the set Z in Lemma 1.12 is uncountable, an extra condition,
called the Strong Lusin Condition (see Chap. 9), is needed to ensure HL
integrability.
Compared with Lebesgue and Bochner integrability, the definition of HL
integrability is easier to understand because no measure theory is needed.
Moreover, all Bochner integrable (i.e., in real-valued case Lebesgue integrable)
functions are HL integrable, but not conversely. For instance, HL integrability
encloses improper integrals. Consider the real-valued function f defined on
[0, 1] by f (0) = 0 and f (t) = t2 cos(1/t2 ) for t ∈ (0, 1]. This function is
differentiable on [0, 1], whence f ′ is HL integrable by Lemma 1.12. However,
f ′ is not Lebesgue integrable on [0, 1]. More generally, let t be called a singular
point of the domain interval of a real-valued function that is not Lebesgue
integrable on any subinterval that contains t. Then (cf. [167]) there exist “HL
integrable functions on an interval that admit a set of singular points with its
measure as close as possible but not equal to that of the whole interval.”
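The claim about this example can be made explicit by a short computation (the steps are ours, not spelled out in the text): for t ∈ (0, 1],

\[ f'(t) = 2t\cos\frac{1}{t^2} + \frac{2}{t}\sin\frac{1}{t^2}, \qquad f'(0) = 0, \]

and the substitution u = 1/t² gives

\[ \int_0^1 \frac{2}{t}\Bigl|\sin\frac{1}{t^2}\Bigr|\,dt = \int_1^\infty \frac{|\sin u|}{u}\,du = \infty, \]

so the oscillatory term is not Lebesgue integrable near 0 (while 2t cos(1/t²) is bounded), whence f′ is not Lebesgue integrable; its Henstock–Kurzweil integral over [0, 1] is simply f(1) − f(0) = cos 1.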
If g is HL integrable on [a, b], it is HL integrable on every closed subinterval
[c, d] of [a, b]. The Henstock–Kurzweil integral of g over [c, d] is defined by
K∫_c^d g(s) ds := f (d) − f (c),   where f is a primitive of g.

The main advantage of the Henstock–Kurzweil integral is its applicability for
integration of highly oscillatory functions that occur in quantum theory and
nonlinear analysis. This integral provides a tool to construct a rigorous mathematical formulation for Feynman’s path integral, which plays an important
role in quantum physics (see, e.g., [143, 182]).
On the other hand, as stated in [98, p.13], the most important factor preventing a widespread use of the Henstock–Kurzweil integral in engineering,
mathematics, and physics has been the lack of a natural Banach space structure for the class of HL integrable functions, even in the case when E = R.
However, if E is ordered, the validity of the dominated and monotone convergence theorems, which we prove for order-bounded sequences of HL integrable functions (see Chap. 9), considerably improves the applicability of the
Henstock–Kurzweil integral in Nonlinear Analysis. Combined with fixed point
theorems in ordered normed spaces presented in Chap. 2, these convergence
theorems provide effective tools to solve differential and integral equations
that contain HL integrable valued functions and discontinuous nonlinearities.
All this will be discussed in detail in Chaps. 6 and 7 and shows once more the
importance of the order structure of the underlying function spaces. In particular, the above stated lack of a Banach space structure causes no problems
in our studies. Moreover, as the following simple example shows, the ordering
allows us to determine the smallest and greatest solutions of such equations.
Example 1.14. Determine the smallest and the greatest continuous solutions
of the following Cauchy problem:
y′(t) = q(t, y(t), y) for a.e. t ∈ J := [0, 4],   y(0) = 0,                    (1.17)

where

q(t, x, y) = p(t) sgn(x) + h(y)(1 + cos(t)),
p(t) = cos(1/t) + sgn(cos(1/t)) (1/t) sin(1/t),
h(y) = [2 arctan(∫_1^4 y(s) ds)],                                              (1.18)
sgn(x) = 1 if x > 0,  0 if x = 0,  −1 if x < 0,
[x] = max{n ∈ Z : n ≤ x}.

Note that the bracket function, called the greatest integer function, occurs in
the function h.
Solution: If y ∈ C(J, R) and y(t) > 0 when t > 0, then sgn(y(t)) = 1 when
t > 0. Thus
q(t, y(t), y) = qy(t) := p(t) + h(y)(1 + cos(t)),   t ∈ J.

The function fy : J → R, defined by

fy(0) = 0,   fy(t) = t cos(1/t) + h(y)(t + sin(t)),   t ∈ (0, 4],

is continuous, and fy′(t) = qy(t) if t ∈ (0, 4] and t ≠ 1/((2n + 1)π), n ∈ N0. Thus qy
is HL integrable on J and fy is its primitive by Lemma 1.12. This result and
the definitions of fy, qy and the Henstock–Kurzweil integral imply that

Gy(t) := K∫_0^t q(s, y(s), y) ds = fy(t),   t ∈ J.


Moreover, h(y) = [2 arctan(∫_1^4 y(s) ds)] ≤ 3 for every y ∈ C(J, R). Thus,
defining

y*(t) = t cos(1/t) + 3(t + sin(t)), t ∈ (0, 4],   y*(0) = 0,

then fy(t) ≤ y*(t) for all t ∈ J and y ∈ C(J, R). On the other hand, it is easy
to show that h(y*) = [2 arctan(∫_1^4 y*(s) ds)] = 3. Consequently,

y*(t) = f_{y*}(t) = Gy*(t),   t ∈ J.

It follows from the above equation by differentiation that
(y*)′(t) = f_{y*}′(t) = q_{y*}(t) = q(t, y*(t), y*),   t ∈ (0, 4], t ≠ 1/((2n + 1)π), n ∈ N0.

Moreover y*(0) = 0, so that y* is a solution of problem (1.17). The above
reasoning shows also that if y ∈ C(J, R) is a solution of problem (1.17), then
y(t) ≤ y*(t) for every t ∈ J. Thus y* is the greatest continuous solution of
problem (1.17).
By similar reasoning one can show that the smallest solution of the Cauchy
problem (1.17) is
y_*(t) = −t cos(1/t) − 4(t + sin(t)), t ∈ (0, 4],   y_*(0) = 0.

The function (t, x, y) ↦ q(t, x, y), defined in (1.18), has the following properties.
•  It is HL integrable, but it is neither Lebesgue integrable nor continuous
   with respect to the independent variable t if x ≠ 0, because p is not
   Lebesgue integrable.
•  Its dependence on all the variables t, x, and y is discontinuous, since the
   signum function sgn, the greatest integer function [·], and the function p
   are discontinuous.
•  Its dependence on the unknown function y is nonlocal, since the integral
   of the function y appears in the argument of the arctan-function.
•  Its dependence on x is not monotone, since p attains positive and negative
   values in an infinite number of disjoint sets of positive measure.
   For instance, y*(t) > y_*(t) for all t ∈ (0, 4], but the difference function
   t ↦ q(t, y*(t), y*) − q(t, y_*(t), y_*) = 2p(t) + 7(t + sin(t)) is neither
   nonnegative-valued nor Lebesgue integrable on J.
The basic theory of Banach-valued HL integrable functions needed in Chaps.
6 and 7 is presented in Chap. 9. However, readers who are interested in Real
Analysis may well consider the functions to be real-valued. For those readers
who are familiar with Bochner integrability theory, notice that all the theoretical results of Chaps. 6 and 7 where HL integrability is assumed remain valid
if HL integrability is replaced by Bochner integrability. As far as the authors
know, even the so obtained special cases are not presented in any other book.


In Sect. 6.5 we study functional differential equations equipped with a
functional initial condition in an ordered Banach space E. There we need
fixed point results for an increasing mapping G : P → P , where P is a subset
of the Cartesian product of the space L1 (J, E) of Bochner integrable functions
from J := [t0 , t1 ] to E and the space C(J0 , E) of continuous functions from
J0 := [t0 − r, t0 ] to E. The difficulties one is faced with in the treatment of
the considered problems are, first, that only a.e. pointwise weak convergence
in L1 (J, E) is available, and second, monotone and bounded sequences of
the pointwise ordered space C(J0 , E) need not necessarily have supremums
and infimums in C(J0 , E). The following purely order-theoretic fixed point
theorem, which is proved in Chap. 2, is the main tool that will allow us to
overcome the above described difficulties.
Theorem 1.15. Let G be an increasing self-mapping of a partially ordered
set P such that chains of the range G[P ] of G have supremums and infimums
in P , and that the set of these supremums and infimums has a sup-center.
Then G has minimal and maximal fixed points, as well as fixed points that are
increasing with respect to G.
This fixed point theorem will be applied, in particular, in Sects. 6.5 and 7.3
to prove existence and comparison results for solutions of operator equations
in partially ordered sets, integral equations, as well as implicit functional
differential problems in ordered Banach spaces. It is noteworthy that the data
of the considered problems, i.e., operators and functions involved, are allowed
to depend discontinuously on all their arguments. Moreover, we do not suppose
the existence of subsolutions and/or supersolutions in the treatment.
The abstract order-theoretic fixed point theory developed in Chap. 2 has
been proved to be an extremely powerful tool in dealing with Nash equilibria
for normal form games, which is the subject of Chap. 8.
John Nash invented in [185] an equilibrium concept that now bears his
name. Because of its importance in economics, Nash earned the Nobel Prize
for Economics in 1994. In [185] Nash described his equilibrium concept in
terms of game theory as follows:
“Any N -tuple of strategies, one for each player, may be regarded as a point
in the product space obtained by multiplying the N strategy spaces of the
players. One such N -tuple counters another if the strategy of each player
in countering N -tuple yields the highest possible expectation for its player
against the N − 1 strategies of the other player in the countered N -tuple. A
self-countering N -tuple is called an equilibrium point.”
To convert this description into a mathematical concept, we utilize the
following notations. Let Γ = {S1 , . . . , SN , u1 , . . . , uN } be a finite normal-form
game, where the strategy set Si for player i is finite, and the utility function ui
of player i is real-valued and defined on S = S1 ×· · ·×SN . Using the notations


s−i = (s1 , . . . , si−1 , si+1 , . . . , sN ) and ui (s1 , . . . , sN ) = ui (si , s−i ), strategies
s∗1 , . . . , s∗N form a pure Nash equilibrium for Γ if and only if
ui (s∗i , s∗−i ) ≥ ui (si , s∗−i ) for all si ∈ Si and i = 1, . . . , N.
This definition implies that the strategies of players form a pure Nash equilibrium if and only if no player can improve his/her utility by changing the
strategy when all the other players keep their strategies fixed.
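As a concrete illustration of this definition (our own sketch, not an algorithm from the book), the following brute-force Python fragment checks the defining inequality for every strategy profile of an arbitrary 2×2 two-player game and lists its pure Nash equilibria.

# Brute-force search for pure Nash equilibria of a finite two-player game
# (illustrative payoffs; u[i][s1][s2] = utility of player i at profile (s1, s2)).
from itertools import product

u = [
    [[3, 0], [2, 1]],   # player 1
    [[2, 1], [0, 3]],   # player 2
]
S1, S2 = range(2), range(2)

def is_pure_nash(s1, s2):
    best1 = all(u[0][s1][s2] >= u[0][t][s2] for t in S1)   # player 1 cannot improve
    best2 = all(u[1][s1][s2] >= u[1][s1][t] for t in S2)   # player 2 cannot improve
    return best1 and best2

equilibria = [s for s in product(S1, S2) if is_pure_nash(*s)]
print(equilibria)

For these particular payoffs the game has two pure Nash equilibria, (0, 0) and (1, 1), which are comparable in the coordinatewise order, so the game already exhibits a smallest and a greatest pure Nash equilibrium.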
Besides economics, this equilibrium concept has been applied in other social and behavioral sciences, biology, law, politics, etc., cf. [83, 109, 177, 193,
210, 218, 220, 224]. The Nash equilibrium has found so many applications
partly because it can be usefully interpreted in a number of ways (cf. [144]).
For instance, in human interaction (social, economic, or political) the utilities
of players (individuals, firms, or politicians/parties) are mutually dependent
on actions of all players. The Nash equilibrium provides an answer to the
question of what is an optimal action for every player. The following simple
thought experiment describes the usefulness of the Nash equilibrium concept
and its relation to democratic decision procedure.
The traffic board of Sohmu hires a consultant to make a traffic plan for
the only crossroad of town having traffic lights. Traffic should be as safe as
possible. The consultant seeks Nash safety equilibria and finds two: either
every passenger goes toward green light, or, all passengers go toward red light.
He suggests to the board that one of these alternatives should be chosen. The
state council votes on the choice. Every council member votes according to
his/her preferences: Green, Red, or Empty. The result is in Nash equilibrium
with respect to the opinions of the council members.
The above thought experiment implies that the concept of Nash equilibrium
harmonizes with democratic decision making. It also shows that if actions are
in Nash equilibrium, they may give the best result for every participant. It
is not a matter of a zero-sum game where someone loses when someone else
wins.
Nash used in [186] a version of Theorem 1.8 (see [148]) to prove the existence of a Nash equilibrium for a finite normal-form game. Because the strategy sets
are finite, the application of Kakutani’s fixed point theorem was not
possible without extending the sets Si to sets homeomorphic to convex sets. Thus
he extended the strategy sets to contain also so-called mixed
strategies. This means that each player i is allowed to choose independently a randomization over the strategies of Si , that is, a mixed strategy σi is a probability
measure over Si . The values of the utilities Ui , i = 1, . . . , N , are then the expected
values:
Ui (σ1 , . . . , σN ) = ∑_{(s1 ,...,sN )∈S} σ1 ({s1 }) · · · σN ({sN }) ui (s1 , . . . , sN ).

According to Nash’s own interpretation stated above, a mixed Nash equilibrium for Γ is a profile of mixed strategies, one for each of the N players, that has

the property that each player’s strategy maximizes his/her expected utility
against the given strategies of the other players. To put this into a mathematical form, denote σ−i = (σ1 , . . . , σi−1 , σi+1 , . . . , σN ) and Ui (σ1 , . . . , σN ) =
Ui (σi , σ−i ), and let Σi denote the set of all mixed strategies of player i. We
say that mixed strategies σ1∗ , . . . , σN∗ form a mixed Nash equilibrium for Γ if

Ui (σi∗ , σ−i∗ ) = max_{σi ∈ Σi } Ui (σi , σ−i∗ ) for all i = 1, . . . , N.

As for a variety of areas where the concept of Nash equilibrium is applied, see
[144, 183] and the references therein.
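As a hedged illustration of the expected-utility formula displayed above (not from the book, which uses Maple for its computations), the following Python sketch evaluates Ui for a profile of mixed strategies given as probability dictionaries; the game data are invented.

from itertools import product

def expected_utility(i, sigmas, strategies, u_i):
    # U_i(sigma_1,...,sigma_N): sum over all pure profiles of
    # sigma_1({s_1}) * ... * sigma_N({s_N}) * u_i(s_1,...,s_N).
    total = 0.0
    for profile in product(*strategies):
        prob = 1.0
        for j, s_j in enumerate(profile):
            prob *= sigmas[j].get(s_j, 0.0)
        total += prob * u_i(profile)
    return total

# Invented example: matching-pennies payoffs for player 0 under the uniform mixtures.
S = [["H", "T"], ["H", "T"]]
u0 = lambda s: 1.0 if s[0] == s[1] else -1.0
uniform = [{"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}]
print(expected_utility(0, uniform, S, u0))  # 0.0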
In Chap. 8 we present some recent results dealing with Nash equilibria
for normal-form games. Our study is focused on games with strategic complementarities, which means roughly speaking that the best response of any
player is increasing in actions of the other players. Sections 8.1 and 8.2 are
devoted especially to those readers who are interested only in finite games.
In Sect. 8.1 we prove the existence of the smallest and greatest pure Nash
equilibria of a normal-form game whose strategy spaces Si are finite sets of
real numbers, and the real-valued utility functions ui possess a finite difference property. If the utilities ui (s1 , . . . , sN ) are also increasing (respectively
decreasing) in sj , j ≠ i, the utilities of the greatest (respectively the smallest)
pure Nash equilibrium are shown to majorize the utilities of all pure Nash
equilibria. An application to a pricing problem is given.
Our presentation of Sects. 8.2–8.5 has three main purposes.
1. In order to avoid “throwing dice” in the search for Nash equilibria, it
would be desirable that Γ has a pure Nash equilibrium whose utilities majorize
the utilities of all other Nash equilibria for Γ , including mixed Nash equilibria.
In such a case it would be of no benefit to seek possible mixed Nash equilibria.
If Γ is a finite normal-form game, every player who has at least two pure
strategies has uncountably many mixed strategies. On the other hand, the
set of its pure strategies, as well as the ranges of its utilities, are finite for
each player. Thus one can find pure Nash equilibria in concrete situations by
finite methods. In Sect. 8.2 we prove that finite normal-form games, which are
supermodular in the sense defined in [218, p. 178], possess the above described
desirable properties. The proof is constructive and provides a finite algorithm
to determine the most profitable pure Nash equilibrium for Γ . This algorithm
and Maple programming are applied to calculate the most profitable pure Nash
equilibrium and the corresponding utilities for some concrete pricing games.
Proposition 8.61 deals also with finite normal-form games.
2. Theorem 1.9, along with other fixed point theorems presented in Chap.
2, is applied in Sects. 8.3 and 8.4 to derive existence and comparison results
for extremal Nash equilibria of normal-form games in more general settings.
For instance, the result for finite supermodular games proved in Sect. 8.2 is
extended in Sect. 8.4 to the case when the strategy spaces Si are compact
sublattices of complete and separable ordered metric spaces. The easiest case
is when strategy spaces are subsets of R. In fact, it has been shown recently (cf.

[86]) that when the strategies of a supermodular normal-form game are real
numbers, the mixed extension of that game is supermodular, its equilibria form
a non-empty complete lattice, and its extremal equilibria are in pure strategies
when mixed strategies are ordered by first order stochastic dominance. A
problem that arises when the strategies are not in R is described in [86, Chap.
3] as follows:
“When the strategy spaces are multidimensional, the set of mixed strategies
is not a lattice. This implies that we lack the mathematical structure needed
for the theory of complementarities. We need lattice property to make sense
of increasing best responses when they are not real-valued. Multiple best
responses are always present when dealing with mixed equilibria and there
does not seem a simple solution to the requirement that strategy spaces be
lattices.”
In particular, classical fixed point theorems in complete lattices are not applicable, even for finite normal-form games having multidimensional strategy spaces. Moreover, in such cases the desirable comparison results between
pure and mixed strategies cannot be obtained by the methods used, e.g., in
[86, 180, 217, 218, 222, 223]. The results of Theorems 2.20 and 2.21 and their
duals provide tools to overcome the problem caused by the above stated nonlattice property. Our results imply that the smallest and greatest pure Nash
equilibria of supermodular games form lower and upper bounds for all possible mixed Nash equilibria when the set of mixed strategies is ordered by
first-order stochastic dominance. These lower and upper bounds have the important property that they are the smallest and greatest rationalizable strategy profiles, as shown in [179]. In particular, if these bounds are equal, then
there is only one pure Nash equilibrium, which is also a unique rationalizable
strategy profile, and no properly mixed Nash equilibria exist. In [218, Sect.
4] the following eight examples of supermodular games are presented: Pricing
game with substitute products, production game with complementary products, multimarket oligopoly, arms race game, trading partner search game,
optimal consumption game with multiple products, facility location game,
and minimum cut game. The first example is studied here more closely. In
this example the greatest Nash equilibrium has also the desirable property
that the utilities of the greatest Nash equilibrium majorize the utilities of all
other Nash equilibria, including mixed Nash equilibria. Concrete examples are
solved by using Maple programming.
3. Another property, which restricts the application of the original result of
Nash, is that the utility functions are assumed to be real-valued. This requires
that differences of values of ui can be estimated by numbers. The results of
Sects. 8.2 and 8.4 are proved in the case when the values of ui are in ordered
vector spaces Ei . Thus we can consider the cases when the values of utilities
are random variables by choosing Ei = L2 (Ωi , R), ordered a.e. pointwise. If
the strategy spaces are finite, the only extra hypothesis is that the ranges of
ui (·, s−i ) are directed upward. As an application the pricing game is extended

to the form where the values of the utility functions are in Rmi . Concrete
examples are solved also in this case.
But in the definition of pure Nash equilibrium a partial ordering of the
values of ui is sufficient. The fixed point theorems presented in Chap. 2 apply
to derive existence results for extremal pure Nash equilibria of normal-form
games when both the strategy spaces Si and ranges of utility functions ui are
partially ordered sets. Such results are presented in Sect. 8.3. We also present
results dealing with monotone comparative statics, i.e., conditions that ensure
the monotone dependence of extremal pure Nash equilibria on a parameter
that belongs to a poset. As for applications, see, e.g., [10, 11, 180, 218]. These
results can be applied to cases where the utilities of different players are
evaluated in different ordinal scales, where all the values of the utility functions
need not even be order-related. Thus the way is open for new applications of
the theory of Nash equilibrium, for instance, in social and behavioral sciences.
In such applications the term ‘utility’ would approach one of its meanings:
“the greatest happiness of the greatest number.”
A necessary condition for the existence of a Nash equilibrium for Γ is
that the functions ui (·, s−i ) have maximum points. When the ranges of these
functions are in partially ordered sets that are not topologized, the classical
hypotheses, like upper semicontinuity, are not available. Therefore we define
a new concept, called upper closeness, which ensures the existence of required
maximum points. Upper semicontinuity implies upper closeness for real-valued
functions.
In Sect. 8.5 we study the existence of undominated and weakly dominating
strategies of normal-form games when the ranges of the utility functions are
in partially ordered sets. A justification for Sects. 8.2–8.5 is a philosophy of
economic modeling stated in [13]: “The weakest sufficient conditions for robust
conclusions is particularly important to economists.” Concrete examples are
presented.
The existence of winning strategies for a pursuit and evasion game is
proved in Sect. 8.6. As an introduction to the subject consider a finite pursuit
and evasion game. The game board P is a nonempty subset of R2 , equipped with
coordinatewise partial ordering ≤. Assume that to every position x ∈ P of
player p (pursuer) there corresponds a nonempty subset F(x) ⊆ P of possible
positions y of player q (quarry). The only rule of the game is:
(R) If (xn , yn ) and (xn+1 , yn+1 ) are consecutive positions of a play, then yn ≤
yn+1 whenever xn < xn+1 , and yn+1 ≤ yn whenever xn+1 < xn .
We say that a strategy of p is a winning strategy if its use guarantees capture, i.e., after a finite number of move pairs p and q are in the same position.
Player p has a winning strategy if the following conditions hold:
(i) The set F[P ] = ∪{F(x) : x ∈ P } has a sup-center c ∈ P .

(ii) If x ≤ y in P , then for every z ∈ F(x) for which z ≤ y there exists such
a w ∈ F(y) that z ≤ w, and for every w ∈ F(y) satisfying x ≤ w there
exists such a z ∈ F(x) that z ≤ w.
(iii) Strictly monotone sequences of F[P ] are finite.
The existence of a winning strategy for p can be justified as follows. Player
p starts from x0 = c, and q starts from y0 ∈ F(x0 ). If x0 = y0 , then p wins.
Otherwise, let xn and yn ∈ F(xn ) denote the positions of p and q after the nth move
pair of a play. If xn ≠ yn , then p moves to xn+1 = sup{c, yn } if xn and yn
are unordered, and to xn+1 = yn if xn and yn are ordered. If xn < xn+1 ,
then q must obey rule (R) and choose a position yn+1 of F(xn+1 ) such that
yn ≤ yn+1 , which is possible due to condition (ii). If xn+1 < xn , then similarly
q obeying the rule (R) can choose by condition (ii) yn+1 ∈ F(xn+1 ) so that
yn+1 ≤ yn . Condition (iii) ensures that every play which follows these rules
stops after a finite number of moves to the situation where xm = ym .
The correspondence x 7→ F(x) can be considered also as a set-valued
mapping from P to the set 2P \ ∅ of nonempty subsets of P . Since the final
positions x = xm of p and y = ym of q after a play satisfy x = y ∈ F(x), the
above reasoning shows that F has a fixed point under conditions (i)–(iii).
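The pursuer’s move rule described above can be written down directly; the following Python sketch (with invented positions, not part of the original text) does this for the coordinatewise partial ordering on R2.

def leq(a, b):
    # Coordinatewise partial ordering on R^2.
    return a[0] <= b[0] and a[1] <= b[1]

def sup(a, b):
    # Coordinatewise supremum in R^2.
    return (max(a[0], b[0]), max(a[1], b[1]))

def pursuer_move(c, x_n, y_n):
    # Move to y_n if the current positions are ordered, and to sup{c, y_n} otherwise.
    if leq(x_n, y_n) or leq(y_n, x_n):
        return y_n
    return sup(c, y_n)

# Invented positions; sup-center c = (0, 0).
c = (0, 0)
print(pursuer_move(c, (1, 1), (2, -1)))  # unordered positions -> (2, 0) = sup{c, (2, -1)}
print(pursuer_move(c, (2, 0), (3, 1)))   # ordered positions   -> (3, 1)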
To see how the pursuit–evasion game and the fixed point problem formulated above may differ when one of the conditions (i)–(iii) is violated, choose
P = {a, b}, where a = (0, 1) and b = (1, 0). If F(a) = F(b) = P , then both
a and b are fixed points of F. If x0 is any of the points of P from which p
starts, then q can start from the other point of P , and p cannot capture q. In
this example conditions (ii) and (iii) hold, but (i) is not valid. This lack can
also lead to the nonexistence of a fixed point of F even in the single-valued case,
as we see by choosing P as above and defining F(a) = {b} and F(b) = {a}.
Also in this case conditions (ii) and (iii) are valid. The above results hold true
also when P is a partially ordered set (poset), positive and negative directions
being determined by a partial ordering of P .
The finite game introduced above is generalized in Sect. 8.6, where we
study the existence of winning strategies for pursuit and evasion games that
are of ordinal length. The obtained results are then used to study the solvability of equations and inclusions in ordered spaces. Monotonicity hypotheses,
like (ii) above, are weaker than those assumed in Chap. 2.
As for the roots of the methods used in the proofs of Theorems 1.2, 1.5, 1.9,
1.15, and related theorems in Chap. 2, and in Sect. 8.6, we refer to Ernst Zermelo’s letter to David Hilbert, dated September 24, 1904. This letter contains
the first proof that every set can be well-ordered, i.e., every set P has such
a partial ordering that each nonempty subset A of P has a smallest element. The
proof in question was published in the same year in Mathematische Annalen
(see [231]). The influence of that proof is described in [94, p.84] as follows:
“The powder keg had been exploded through the match lighted by Zermelo in
his first proof of well-ordering theorem.” To find out what in that proof was
so shocking we give an outline of it. The notations are changed to reveal a
recursion principle that is implicitly included in Zermelo’s proof. That principle
forms a cornerstone to proofs of the main existence and comparison results of
this book, including the fixed point results stated above. As for the concepts
related to ordering, see Chap. 2.
Let P be a nonempty set, and let f be a choice function that selects
from the complement P \ U of every proper subset U of P an element f (U )
(= γ(P \ U ), where γ is “eine Belegung” in Zermelo’s proof). We say that a
nonempty subset A of P is an f -set if A has such an order relation < that
the following condition holds:
(i) (A, <) is well-ordered, and if x ∈ A, then x = f (A<x ),
where A<x = {y ∈ A : y < x}.
Applying a comparison principle for well-ordered sets proved by Georg Cantor
in 1897 (see [36]) one can show that if A = (A, <) and B = (B, ≺) are f -sets
and A ⊈ B, then B is an initial segment of A, and if x, y ∈ B, then x ≺ y
if and only if x < y. Using these properties it is then elementary to verify
that the union C of f -sets is an f -set, ordered by the union of the orderings
of f -sets.
We have C = P , for otherwise A = C ∪ {f (C)} would be an f -set, ordered
by the union of the ordering of C and {(y, f (C)) : y ∈ C}, contradicting the
fact that C is the union of all f -sets. Thus P is an f -set, and hence well
ordered.
The proof is based on three principles. One of them ensures the existence of a choice function f . After his proof Zermelo mentions that “Die Idee,
unter Berufung auf dieses Prinzip eine beliebige Belegung der Wohlordnung zugrunde
zu legen, verdanke ich Herrn Erhard Schmidt.” (“The idea of basing the well-ordering on an arbitrary covering by appealing to this principle I owe to Mr. Erhard Schmidt.”) This principle, which
is a form of the Axiom of Choice, caused the strongest reactions against Zermelo’s proof, because there exists no constructive method to determine f for
an arbitrary infinite set P . Another principle used in the proof is Cantor’s
comparison principle for well-ordered sets. A third principle is hidden in the
construction of the union C of f -sets: Because C is an f -set, then x ∈ C implies x = f (C <x ). Conversely, if x = f (C <x ), then x ∈ P = C. Consequently,
x ∈ C ⇐⇒ x = f (C <x ).   (A)
In Zermelo’s proof f was a choice function. Recently, this special instance has been
generalized to the following mathematical method, called the Chain Generating Recursion Principle (see [112, 133]).
Given any nonempty partially ordered set P = (P, <), a family D of subsets
of P with ∅ ∈ D and a mapping f : D → P , there is exactly one well-ordered
chain C of P such that (A) holds. Moreover, if C ∈ D, then f (C) is not a
strict upper bound of C.
In the proof of this result only elementary properties of set theory are used in
[112, 133]. In particular, neither the Axiom of Choice nor Cantor’s comparison

principle are needed. To make this book more self-contained, we give another
proof in the Preliminaries, Chap. 2.
To give a simple example, let D be the family of all finite subsets of the
set P = R of real numbers, and f (U ), U ∈ D, the number of elements of U .
By the Chain Generating Recursion Principle there is exactly one subset C
of R that is well-ordered by the natural ordering of R and satisfies (A). The
elements of C are values of f , so that C ⊆ N0 = {0, 1, . . . }. On the other
hand, N0 is a well-ordered subset of R, and n = f (N0 <n ), n ∈ N0 . Thus N0 is
an f -set, whence N0 ⊆ C. Consequently, C = N0 , so that (A) generates the
set of natural numbers.
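As an illustration of condition (A) (this code is not from the text), the following Python sketch checks whether a finite increasing sequence is an initial segment of the chain C for the choice f (U ) = number of elements of U used in the example above.

def is_f_chain_prefix(chain, f):
    # Condition (A) on a finite well-ordered prefix: every x in the chain
    # must equal f applied to the set of its strict predecessors in the chain.
    return all(x == f({y for y in chain if y < x}) for x in chain)

f = lambda U: len(U)  # f(U) = number of elements of U
print(is_f_chain_prefix([0, 1, 2, 3, 4], f))  # True: an initial segment of C = N_0
print(is_f_chain_prefix([0, 1, 3], f))        # False: 3 != f({0, 1}) = 2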
More generally, given (P, <, D, f ), condition (A) can be considered formally as a ‘recursion automaton’ that generates exactly one well-ordered set
C. The collection of admissible quadruples (P, <, D, f ) is so large that no set can
accommodate them.
The first elements of C satisfying (A) are
x0 := f (∅), . . . , xn+1 := f ({x0 , . . . , xn }), as long as xn < f ({x0 , . . . , xn }).   (1.19)
If xn+1 = xn for some n, then xn = max C. This property can be used
to derive algorithmic methods that can be used to determine exact or approximate solutions for many kinds of concrete discontinuous nonlocal problems, as
well as to calculate pure Nash equilibria and corresponding utilities for finite
normal-form games. The Chain Generating Recursion Principle is applied in
this book to introduce generalized iteration methods, which provide the basis
for the proofs of our main fixed point theorems including Theorems 1.2, 1.5,
1.9, and 1.15. They are applied to prove existence and comparison results for
a number of diverse problems such as, e.g., operator equations and inclusions,
partial differential equations and inclusions, ordinary functional differential
and integral equations in ordered Banach spaces involving singularities, discontinuities, and also non-absolutely integrable functions. Moreover, these abstract fixed point results are shown to be useful and effective tools to prove
existence results for extremal Nash equilibria for normal-form games, and to
study the existence of winning strategies for pursuit and evasion games.

2 Fundamental Order-Theoretic Principles

In this chapter we use the Chain Generating Recursion Principle formulated in
the Introduction to develop generalized iteration methods and to prove existence and comparison results for operator equations and inclusions in partially
ordered sets. Algorithms are designed to solve concrete problems by appropriately constructed Maple programs.

2.1 Recursions and Iterations in Posets
Given a nonempty set P , a relation x < y in P × P is called a partial ordering,
if x < y implies y ≮ x, and if x < y and y < z imply x < z. Defining x ≤ y if
and only if x < y or x = y, we say that P = (P, ≤) is a partially ordered set
(poset).
An element b of a poset P is called an upper bound of a subset A of P if
x ≤ b for each x ∈ A. If b ∈ A, we say that b is the greatest element of A,
and denote b = max A. A lower bound of A and the smallest element min A
of A are defined similarly, replacing x ≤ b above by b ≤ x. If the set of all
upper bounds of A has the smallest element, we call it a supremum of A and
denote it by sup A. We say that y is a maximal element of A if y ∈ A, and if
z ∈ A and y ≤ z imply that y = z. An infimum of A, inf A, and a minimal
element of A are defined similarly. A poset P is called a lattice if inf{x, y} and
sup{x, y} exist for all x, y ∈ P . A subset W of P is said to be upward directed
if for each pair x, y ∈ W there is a z ∈ W such that x ≤ z and y ≤ z, and
W is downward directed if for each pair x, y ∈ W there is a w ∈ W such that
w ≤ x and w ≤ y. If W is both upward and downward directed it is called
directed. A set W is said to be a chain if x ≤ y or y ≤ x for all x, y ∈ W . We
say that W is well-ordered if nonempty subsets of W have smallest elements,
and inversely well-ordered if nonempty subsets of W have greatest elements.
In both cases W is a chain.
A basis to our considerations is the following Chain Generating Recursion
Principle (cf. [112, Lemma 1.1], [133, Lemma 1.1.1]).
Lemma 2.1. Given a nonempty poset P , a subset D of 2P = {A : A ⊆ P }
with ∅ ∈ D, where ∅ denotes the empty set, and a mapping f : D → P . Then
there is a unique well-ordered chain C in P such that
x ∈ C if and only if x = f (C <x ), where C <x = {y ∈ C : y < x}.   (2.1)
If C ∈ D, then f (C) is not a strict upper bound of C.
Proof: A nonempty subset A of P is called an f -set (with f given in the
lemma, and thus the proof is independent of the Axiom of Choice) if it has
the following properties.
(i) (A, <) is well-ordered, and if x ∈ A, then x = f (A<x ), where A<x = {y ∈
A : y < x}.
For instance, the singleton {f (∅)} is an f -set. These sets possess the following
property:
(a) If A and B are f -sets and A ⊈ B, then B = A<x for some x ∈ A.
Namely, according to a comparison principle for well-ordered sets (see [36])
there exists such a bijection ϕ : B → A<x for some x ∈ A that ϕ(u) < ϕ(v) if
and only if u < v in B. The set S = {u ∈ B : u ≠ ϕ(u)} is empty, for otherwise,
y = min S would exist and B <y = A<ϕ(y) , which yields a contradiction:
y ≠ ϕ(y) and y = f (B <y ) = f (A<ϕ(y) ) = ϕ(y). Thus B = ϕ[B] = A<x ,
which proves (a).
Applying (a) it is then elementary to verify that the union C of all f -sets is an f -set. Hence, x = f (C <x ) for all x ∈ C. Conversely, if x ∈ P and
x = f (C <x ), then C <x ∪ {x} is an f -set, whence x ∈ C. Thus (2.1) holds for
C.
To prove uniqueness, let B be a well-ordered subset of P for which x ∈
B ⇔ x = f (B <x ). Since B is an f -set, B ⊆ C. If B ≠ C, then B = C <x by
(a). But then f (B <x ) = f (B) = f (C <x ) = x, and x ∉ B, which contradicts
x ∈ B ⇔ x = f (B <x ). Thus B = C, which proves the uniqueness of
C. (The well-ordering condition is needed in this proof, since there may exist
other partially ordered sets that satisfy (2.1), cf. [110].)
If f (C) is defined, it cannot be a strict upper bound of C, for otherwise
f (C) ∉ C and f (C) = f (C <f (C) ), so that C ∪ {f (C)} would be an f -set, not
contained in C, which is the union of all f -sets. This proves the last assertion
of the lemma. ⊓⊔
As a consequence of Lemma 2.1 we get the following result (cf. [116, Lemma
2]).
Lemma 2.2. Given G : P → P and c ∈ P , there exists a unique well-ordered
chain C = C(G) in P , called a w-o chain of cG-iterations, satisfying
x ∈ C if and only if x = sup{c, G[C <x ]}.

(2.2)

Proof: Denote D = {W ⊆ P : W is well-ordered and sup{c, G[W ]} exists}.
Defining f (W ) = sup{c, G[W ]}, W ∈ D, we get a mapping f : D → P , and
(2.1) is reduced to (2.2). Thus the assertion follows from Lemma 2.1. ⊓⊔
A subset W of a chain C is called an initial segment of C if x ∈ W and
y < x imply y ∈ W . The following application of Lemma 2.2 is used in the
sequel.
Lemma 2.3. Denote by G the set of all selections from F : P → 2P \ ∅, i.e.,
G := {G : P → P : G(x) ∈ F(x) for all x ∈ P }.   (2.3)
Given c ∈ P and G ∈ G. Let CG denote the longest initial segment of the
w-o chain C(G) of cG-iterations such that the restriction G|CG of G to CG
is increasing (i.e., G(x) ≤ G(y) whenever x ≤ y in CG ). Define a partial
ordering ≺ on G as follows: Let F, G ∈ G then
(O) F ≺ G if and only if CF is a proper initial segment of CG and G|CF =
F |CF .
Then (G, ≺) has a maximal element.
Proof: Let C be a chain in G. The definition (O) of ≺ implies that the sets
CF , F ∈ C, form a nested family of well-ordered sets of P . Thus the set
C := ∪{CF : F ∈ C} is well-ordered. Moreover, it follows from (O) that the
functions F |CF , F ∈ C, considered as relations in P × P , are nested. This
ensures that g := ∪{F |CF : F ∈ C} is a function from C to P . Since each
F ∈ C is increasing in CF , then g is increasing, and g(x) ∈ F(x) for each
x ∈ C. Let G be such a selection from F that G|C = g. Then G ∈ G, and G is
increasing on C. If x ∈ C, then x ∈ CF for some F ∈ C. The definitions of C
and the partial ordering ≺ imply that CF is C or its initial segment, whence
CF<x = C <x . Because F |CF = g|CF = G|CF , then
x = sup{c, F [CF <x ]} = sup{c, G[C <x ]}.   (2.4)
This result implies by (2.2) that C is C(G) or its proper initial segment. Since
G is increasing on C, then C is CG or its proper initial segment. Consequently,
G is an upper bound of C in G. This result implies by Zorn’s Lemma that G
has a maximal element. ⊓⊔
Let P = (P, ≤) be a poset. For z, w ∈ P , we denote
[z) = {x ∈ P : z ≤ x}, (w] = {x ∈ P : x ≤ w} and [z, w] = [z) ∩ (w].
A poset X equipped with a topology is called an ordered topological space if
the order intervals [z) and (z] are closed for each z ∈ X. If the topology of
X is induced by a metric, we say that X is an ordered metric space. Next we
define some concepts for set-valued functions.

Definition 2.4. Given posets X and P , we say F : X → 2P \∅ is increasing
upward if x ≤ y in X and z ∈ F(x) imply that [z) ∩ F(y) is nonempty. F
is increasing downward if x ≤ y in X and w ∈ F(y) imply that (w] ∩
F(x) is nonempty. If F is increasing upward and downward, we say that F
is increasing.
Definition 2.5. A nonempty subset A of a subset Y of a poset P is called
order compact upward in Y if for every chain C of Y that has a supremum
in P the intersection ∩{[y) ∩ A : y ∈ C} is nonempty whenever [y) ∩ A is
nonempty for every y ∈ C. If for every chain C of Y that has the infimum in
P the intersection of all the sets (y] ∩ A, y ∈ C is nonempty whenever (y] ∩ A
is nonempty for every y ∈ C, we say that A is order compact downward
in Y . If both these properties hold, we say that A is order compact in Y .
Phrase ‘in Y ’ is omitted if Y = A.
Every poset P is order compact. If a subset A of P has the greatest element (respectively the smallest element), then A is order compact upward
(respectively downward) in any subset of P that contains A. Thus an order
compact set is not necessarily (topologically) compact, not even closed. On
the other hand, every compact subset A of an ordered topological space P is
obviously order compact in every subset of P that contains A.
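For finite posets, Definition 2.4 can be checked by inspection; the following Python sketch (invented toy data, not part of the book) tests whether a set-valued map, given as a dictionary of finite sets, is increasing upward and downward.

def is_increasing_upward(F, X, leq):
    # Whenever x <= y and z in F(x), some w in F(y) must satisfy z <= w.
    return all(any(leq(z, w) for w in F[y])
               for x in X for y in X if leq(x, y) for z in F[x])

def is_increasing_downward(F, X, leq):
    # Whenever x <= y and w in F(y), some z in F(x) must satisfy z <= w.
    return all(any(leq(z, w) for z in F[x])
               for x in X for y in X if leq(x, y) for w in F[y])

# Toy data: X = {0, 1} with the usual order; F(0) = {0, 1}, F(1) = {1, 2}.
leq = lambda a, b: a <= b
F = {0: {0, 1}, 1: {1, 2}}
print(is_increasing_upward(F, [0, 1], leq), is_increasing_downward(F, [0, 1], leq))  # True True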

2.2 Fixed Point Results in Posets
In this subsection we prove existence and comparison results for fixed points
of set-valued and single-valued functions defined in a poset P = (P, ≤).
Definition 2.6. Given a poset P = (P, ≤) and a set-valued function F : P →
2P \ ∅, denote Fix(F) = {x ∈ P : x ∈ F(x)}. Every element of Fix(F) is
called a fixed point of F. A fixed point of F is called minimal, maximal,
smallest, or greatest if it is a minimal, maximal, smallest, or greatest element of Fix(F), respectively. For a single-valued function G : P → P replace
Fix(F) by Fix(G) = {x ∈ P : x = G(x)}.
2.2.1 Fixed Points for Set-Valued Functions
Our first fixed point result is an application of Lemma 2.1.
Lemma 2.7. Assume that F : P → 2P satisfies the following hypothesis.
(S+ ) The set S+ = {x ∈ P : [x) ∩ F(x) ≠ ∅} is nonempty, and the following
holds: whenever C is a nonempty well-ordered chain in S+ and G : C → P
is increasing with x ≤ G(x) ∈ F(x) for all x ∈ C, then G[C] has an upper
bound in S+ .
Then F has a maximal fixed point, which is also a maximal element of S+ .

Proof: Denote
D = {W ⊂ S+ : W is well-ordered and has a strict upper bound in S+ }.
Because S+ is nonempty by the hypothesis (S+ ), then ∅ ∈ D. Let f : D → P
be a function that assigns to each W ∈ D an element y = f (W ) ∈ [x) ∩ F(x),
where x is a fixed strict upper bound of W in S+ . Lemma 2.1 ensures the
existence of exactly one well-ordered chain W in P satisfying (2.1). By the
above construction and (2.1) each element y of W belongs to [x)∩F(x), where
x is a fixed strict upper bound of W <y in S+ . It is easy to verify that the set C
of these elements x forms a well-ordered chain in S+ ; that the correspondence
x 7→ y defines an increasing mapping G : C → P ; that x ≤ G(x) ∈ F(x) for
all x ∈ C; and that W = G[C]. It then follows from the hypothesis (S+ ) that
W has an upper bound x ∈ S+ , which satisfies x = max W . For otherwise
f (W ) would exist, and as a strict upper bound of W would contradict the
last conclusion of Lemma 2.1. For the same reason x is a maximal element of
S+ .
Since x ∈ S+ , a y ∈ P exists such that x ≤ y ∈ F(x). It then follows
from the hypothesis (S+ ), applied to C = {x} and G(x) := y, that {y} has an upper
bound z in S+ . Because x is a maximal element of S+ , then z = y = x ∈ F(x),
so that x is a fixed point of F. If z is a fixed point of F and x ≤ z, then z ∈ S+ ,
whence x = z. Thus x is a maximal fixed point of F. ⊓⊔
As an application of Lemma 2.7 we obtain the following result.
Proposition 2.8. Assume that F : P → 2P \ ∅ is increasing upward, that the
set S+ = {x ∈ P : [x) ∩ F(x) ≠ ∅} is nonempty, that well-ordered chains of
F[S+ ] have supremums in P , and that the values of F at these supremums
are order compact upward in F[S+ ]. Then F has a maximal fixed point, which
is also a maximal element of S+ .
Proof: It suffices to show that the hypothesis (S+ ) of Lemma 2.7 holds.
Assume that C is a well-ordered chain in S+ , that G : C → P is an increasing
mapping, and that x ≤ G(x) ∈ F(x) for all x ∈ C. Then G[C] is a well-ordered
chain in F[S+ ], so that y = sup G[C] exists. Since F is increasing upward, then
[x) ∩ F(y) ≠ ∅ for every x ∈ G[C]. Because F(y) is order compact upward in
F[S+ ], then the intersection of the sets [x) ∩ F(y), x ∈ G[C] contains at least
one element w. Thus G[C] has an upper bound w in F(y). Since y = sup G[C],
then y ≤ w, so that w ∈ [y) ∩ F(y), i.e., y belongs to S+ . ⊓⊔
The next result is the dual to Proposition 2.8.
Proposition 2.9. Assume that F : P → 2P \ ∅ is increasing downward, that
the set S− = {x ∈ P : (x] ∩ F(x) ≠ ∅} is nonempty, that inversely well-ordered
chains of F[S− ] have infimums in P , and that values of F at these infimums
are order compact downward in F[S− ]. Then F has a minimal fixed point,
which is also a minimal element of S− .

If the range F[P ] has an upper bound (respectively a lower bound) in
P , it belongs to S− (respectively to S+ ). To derive other conditions under
which the set S− or the set S+ is nonempty, we introduce the following new
concepts.
Definition 2.10. Let A be a nonempty subset of a poset P . The set ocl(A)
of all possible supremums and infimums of chains of A is called the order
closure of A. If A = ocl(A), then A is order closed. We say that a subset
A of a poset P has a sup-center c in P if c ∈ P and sup{c, x} exists in
P for each x ∈ A. If inf{c, x} exists in P for each x ∈ A, we say that c is
an inf-center of A in P . If c has both these properties it is called an order
center of A in P . Phrase “in P ” is omitted if A = P .
If P is an ordered topological space, then the order closure ocl(A) of A
is contained in the topological closure of A. If c is the greatest element
(respectively the smallest element) of P , then c is an inf-center (respectively
a sup-center) of P , and trivially c is a sup-center (respectively an inf-center).
Therefore, both the greatest and the smallest element of P are order centers.
If P is a lattice, then its every point is an order center of P . If P is a subset
of R2 , ordered coordinatewise, a necessary and sufficient condition for a point
c = (c1 , c2 ) of P to be a sup-center of a subset A of P in P is that whenever
a point y = (y1 , y2 ) of A and c are unordered, then (y1 , c2 ) ∈ P if y2 < c2 and
(c1 , y2 ) ∈ P if y1 < c1 .
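For finite data, Definition 2.10 can be verified directly from the definition of a supremum; the following Python sketch (invented toy sets, not part of the book) tests whether a point is a sup-center of A in P for a given partial ordering.

def is_sup_center(c, A, P, leq):
    # c is a sup-center of A in P if for every x in A the set of upper bounds
    # of {c, x} in P has a smallest element (Definition 2.10, finite case).
    for x in A:
        ubs = [p for p in P if leq(c, p) and leq(x, p)]
        if not any(all(leq(u, v) for v in ubs) for u in ubs):
            return False
    return True

leq2 = lambda a, b: a[0] <= b[0] and a[1] <= b[1]  # coordinatewise order on R^2
P = {(0, 0), (1, 0), (0, 1), (1, 1)}
print(is_sup_center((1, 0), P, P, leq2))  # True: sup{(1, 0), (0, 1)} = (1, 1) lies in P
Q = {(0, 0), (1, 0), (0, 1)}
print(is_sup_center((1, 0), Q, Q, leq2))  # False: {(1, 0), (0, 1)} has no upper bound in Q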
The following result is an application of Lemma 2.3.
Proposition 2.11. Assume that F : P → 2P \ ∅ is increasing upward and
that its values are order compact upward in F[P ]. If well-ordered chains of
F[P ] have supremums, and if the set of these supremums has a sup-center c
in P , then the set S− = {x ∈ P : (x] ∩ F(x) ≠ ∅} is nonempty.
Proof: Let G be defined by (2.3), and let the partial ordering ≺ be defined
by (O). In view of Lemma 2.3, (G, ≺) has a maximal element G. Let C(G)
be the w-o chain of cG-iterations, and let C = CG be the longest initial
segment of C(G) on which G is increasing. Thus C is well-ordered and G is
an increasing selection from F|C. Since G[C] is a well-ordered chain in F[P ],
then w = sup G[C] exists. Moreover, x̄ := sup{c, w} exists in P by the choice
of c, and it is easy to see that x̄ = sup{c, G[C]}. This result and (2.2) imply
that for each x ∈ C,
x = sup{c, G[C <x ]} ≤ sup{c, G[C]} = x̄.
This proves that x̄ is an upper bound of C, and also of G[C]. Moreover, F
is increasing upward and F(x̄) is order compact upward in F[P ]. Thus the
proof of Proposition 2.8 implies that G[C] has an upper bound z in F(x̄), and
w = sup G[C] ≤ z. To show that x̄ = max C, assume on the contrary that x̄
is a strict upper bound of C. Let F be a selection from F whose restriction
to C ∪ {x̄} is G|C ∪ {(x̄, z)}. Since G is increasing on C and F (x) = G(x) ≤
w ≤ z = F (x̄) for each x ∈ C, then F is increasing on C ∪ {x̄}. Moreover,
x̄ = sup{c, G[C]} = sup{c, F [C]} = sup{c, F [{y ∈ C ∪ {x̄} : y < x̄}]},
whence C ∪ {x̄} is a subset of the longest initial segment CF of the w-o chain
of cF -iterations where F is increasing. Thus C = CG is a proper subset of
CF , and F |CG = G|CG . By (O) this means that G ≺ F , which, however,
is impossible because G is a maximal element of (G, ≺). Consequently, x̄ =
max C. Since G is increasing on C, then x̄ = sup{c, G[C]} = sup{c, G(x̄)}. In
particular, F(x̄) ∋ G(x̄) ≤ x̄, whence G(x̄) belongs to the set (x̄] ∩ F(x̄). ⊓⊔
As a consequence of Propositions 2.8, 2.9, and 2.11 we obtain the following
fixed point result.
Theorem 2.12. Assume that F : P → 2P \∅ is increasing, and that its values
are order compact in F[P ]. If chains of F[P ] have supremums and infimums,
and if ocl(F[P ]) has a sup-center or an inf-center in P , then F has minimal
and maximal fixed points.
Proof: We shall give the proof in the case when ocl(F[P ]) has a sup-center
in P , as the proof in the case of an inf-center is similar. The hypotheses
of Proposition 2.11 are then valid, whence there exists an x ∈ P such that
(x] ∩ F(x) ≠ ∅. Thus the hypotheses of Proposition 2.9 hold, whence F has
by Proposition 2.9 a minimal fixed point x− . In particular [x− ) ∩ F(x− ) ≠ ∅.
The hypotheses of Proposition 2.8 are then valid, whence we can conclude
that F has also a maximal fixed point. ⊓⊔
Example 2.13. Assume that Rm is ordered as follows. For all x = (x1 , . . . , xm ),
y = (y1 , . . . , ym ) ∈ Rm ,
x ≤ y if and only if xi ≤ yi , i = 1, . . . , j, and xi ≥ yi , i = j + 1, . . . , m,   (2.5)

where j ∈ {0, . . . , m}. Show that if F : Rm → 2Rm \ ∅ is increasing, and its
values are closed subsets of Rm , and if F[Rm ] is contained in the set

B_R^p (c) = {(x1 , . . . , xm ) ∈ Rm : ∑_{i=1}^{m} |xi − ci |^p ≤ R^p }, p, R ∈ (0, ∞),

where c = (c1 , . . . , cm ) ∈ Rm , then F has minimal and maximal fixed points.
Solution: Let x = (x1 , . . . , xm ) ∈ B_R^p (c) be given. Since | max{ci , xi } − ci | ≤
|xi − ci | and | min{ci , xi } − ci | ≤ |xi − ci | for each i = 1, . . . , m, it follows
that sup{c, x} and inf{c, x} belong to B_R^p (c) for all x ∈ B_R^p (c). Moreover,
every B_R^p (c) is a closed and bounded subset of Rm , whence its monotone
sequences converge in B_R^p (c) with respect to the Euclidean metric of Rm .

These results, Lemma 2.31 and the given hypotheses imply that chains of
F[Rm ] have supremums and infimums, that c is an order center of ocl(F[Rm ]),
and that the values of F are compact. Thus the hypotheses of Theorem 2.12
are satisfied, whence we conclude that F has minimal and maximal fixed
points. ⊓⊔
2.2.2 Fixed Points for Single-Valued Functions
Next we present existence and comparison results for fixed points of single-valued functions. The following auxiliary result is a consequence of Proposition
2.11 and its proof. We note that the Axiom of Choice is not needed in the
proof.
Proposition 2.14. Assume that G : P → P is increasing, that ocl(G[P ]) has
a sup-center c in P , and that sup G[C] exists whenever C is a nonempty well-ordered chain in P . If C is the w-o chain of cG-iterations, then x = max C
exists, x = sup{c, G(x)} = sup{c, G[C]}, and
x = min{z ∈ P : sup{c, G(z)} ≤ z}.   (2.6)
Moreover, x is the smallest solution of the equation x = sup{c, G(x)}, and it
is increasing with respect to G.
Proof: The mapping F := G : P → 2P \ ∅ is single-valued. Because G
is increasing, then C in Lemma 2.3 is the w-o chain of cG-iterations. The
hypotheses given for G imply also that c is a sup-center of ocl(F[P ]) in P ,
and that sup G[C] exists. Since G is single-valued, the values of F are order
compact in F[P ]. Thus the proof of Proposition 2.11 implies that x = max C
exists, and x = sup{c, G(x)} = sup{c, G[C]}. To prove (2.6), let z ∈ P satisfy
sup{c, G(z)} ≤ z. Then c = min C ≤ z. If x ∈ C and sup{c, G(y)} ≤ z for each
y ∈ C <x , then x = sup{c, G[C <x ]} ≤ z. This implies by transfinite induction
that x ≤ z for each x ∈ C. In particular, max C ≤ z. From this result and
the fact that max C = sup{c, G(max C)} we infer that max C is the smallest solution of
the equation x = sup{c, G(x)}, and that (2.6) holds. The last assertion is an
immediate consequence of (2.6). ⊓⊔
The results presented in the next proposition are dual to those of Lemma
2.2 and Proposition 2.14.
Proposition 2.15. Given G : P → P and c ∈ P , there exists exactly one
inversely well-ordered chain D in P , called an inversely well-ordered (i.w-o)
chain of cG-iterations, satisfying

x ∈ D if and only if x = inf{c, G[{y ∈ D : x < y}]}.   (2.7)
Assume that G is increasing, that ocl(G[P ]) has an inf-center c in P , and
that inf G[D] exists whenever D is a nonempty inversely well-ordered chain

in P . If D is the i.w-o chain of cG-iterations, then x = min D exists, x =
inf{c, G(x)} = inf{c, G[D]}, and
x = max{z ∈ P : z ≤ inf{c, G(z)}}.   (2.8)
Moreover, x is the greatest solution of the equation x = inf{c, G(x)}, and it
is increasing with respect to G.
Our first fixed point result is a consequence of Propositions 2.14 and 2.15.
Theorem 2.16. Let P be a poset and let G : P → P be an increasing mapping.
(a) If x ≤ G(x), and if sup G[C] exists whenever C is a well-ordered chain in
[x) and x ≤ G(x) for every x ∈ C, then the w-o chain C of xG-iterations
has a maximum x∗ and
x∗ = max C = sup G[C] = min{y ∈ [x) : G(y) ≤ y}.   (2.9)
Moreover, x∗ is the smallest fixed point of G in [x), and x∗ is increasing
with respect to G.
(b) If G(x) ≤ x, and if inf G[C] exists whenever C is an inversely well-ordered
chain in (x] and G(x) ≤ x for every x ∈ C, then the i.w-o chain D of xG-iterations has a minimum x∗ and
x∗ = min D = inf G[D] = max{y ∈ (x] : y ≤ G(y)}.   (2.10)
Moreover, x∗ is the greatest fixed point of G in (x], and x∗ is increasing
with respect to G.
Proof: Ad (a) Since G is increasing and x ≤ G(x), then G[[x)] ⊂ [x). It is
also easy to verify that x ≤ G(x) for every element x of the w-o chain C of
xG-iterations. Thus the conclusions of (a) are immediate consequences of
Proposition 2.14 when c = x and G is replaced by its restriction
to [x).
Ad (b) The proof of (b) is dual to that of (a). ⊓⊔
As an application of Propositions 2.14 and 2.15 and Theorem 2.16 we get
the following fixed point results.
Theorem 2.17. Assume that G : P → P is increasing, and that sup G[C]
and inf G[C] exist whenever C is a chain in P .
(a) If ocl(G[P ]) has a sup-center or an inf-center in P , then G has minimal
and maximal fixed points.
(b) If ocl(G[P ]) has a sup-center c in P , then G has the greatest fixed point x∗
in (x], where x is the smallest solution of the equation x = sup{c, G(x)}.
Both x and x∗ are increasing with respect to G.

(c) If c is an inf-center of ocl(G[P ]) in P , then G has the smallest fixed point
x∗ in [x), where x is the greatest solution of the equation x = inf{c, G(x)}.
Both x and x∗ are increasing with respect to G.
Theorem 2.16, Proposition 2.8, its proof, and Proposition 2.9 imply the
following results.
Proposition 2.18. Assume that G : P → P is increasing.
(a) If the set S+ = {x ∈ P : x ≤ G(x)} is nonempty, and if sup G[C] exists
whenever C is a well-ordered chain in S+ , then G has a maximal fixed
point. Moreover, G has for every x ∈ S+ the smallest fixed point in [x),
and it is increasing with respect to G.
(b) If the set S− = {x ∈ P : G(x) ≤ x} is nonempty, and if inf G[D] exists
whenever D is an inversely well-ordered chain in S− , then G has a minimal
fixed point. Moreover, G has for every x ∈ S− the greatest fixed point in
(x], and it is increasing with respect to G.
Example 2.19. Let R+ be the set of nonnegative reals, and let Rm be ordered
coordinatewise. Assume that G : Rm → Rm+ is increasing and maps increasing
sequences of the set S+ = {x ∈ Rm+ : x ≤ G(x)} to bounded sequences. Show
that G has the smallest fixed point and a maximal fixed point.
Solution: The origin is a lower bound of G[Rm ]. Let C be a well-ordered
chain in S+ . Since G is increasing, then G[C] is a well-ordered chain in Rm+ .
If (yn ) is an increasing sequence in G[C], and xn = min{x ∈ C : G(x) = yn },
then the sequence (xn ) is increasing and yn = G(xn ) for every n. Thus (yn ) is
bounded by a hypothesis, and hence converges with respect to the Euclidean
metric of Rm . This result implies by Lemma 2.31 that sup G[C] exists. Thus
the assertions follow from Proposition 2.18. ⊓⊔
2.2.3 Comparison and Existence Results
In the next application of Theorem 2.16, fixed points of a set-valued function
are bounded from above by a fixed point of a single-valued function.
Theorem 2.20. Given a poset X = (X, ≤), a subset P of X and x ∈ P ,
assume that a function G : P → P and a set-valued function F : X → 2X
have the following properties.
(Ha) G is increasing, G(x) ≤ x, and inf G[D] exists in P whenever D is an
inversely well-ordered chain in (x].
(Hb) x is an upper bound of F[X] = ∪x∈X F(x), and if x ≤ p in X and p ∈ P ,
then G(p) is an upper bound of F(x).
Then G has the greatest fixed point x∗ in (x], and if x is any fixed point of F,
then x ≤ x∗ .

Proof: Let D be the i.w-o chain of xG-iterations. Since D is inversely well-ordered, then x∗ = inf G[D] exists and belongs to P by hypothesis (Ha).
Moreover, x∗ is the greatest fixed point of G in (x] by Theorem 2.16 (b).
To prove that x∗ is an upper bound for fixed points of F, assume on the
contrary the existence of a point x of X such that x ∈ F(x) and x ≰ x∗ . Since
x∗ = min D and D is inversely well-ordered, there exists the greatest element
p of D such that x ≰ p. Because x ∈ F(x), then x ≤ max D by (Hb),
whence p < max D. If q ∈ D and p < q, then x ≤ q, so that x ≤ G(q) by (Hb). Thus
x is a lower bound of the set G[{q ∈ D : p < q}]. Since max D is an upper bound
of this set, then p is, by (2.7) with c = max D, the infimum of G[{q ∈ D : p < q}].
But then x ≤ p, which contradicts the choice of p. Consequently, x ≤ x∗
for each fixed point x of F. ⊓⊔
Using the result of Theorem 2.20 we prove the following existence and
comparison result for greatest fixed points of set-valued functions.
Theorem 2.21. Given a nonempty subset P of X and F : X → 2X , assume
that
(H0) F[X] has an upper bound x in P .
(H1) If p ∈ P , then max F(p) exists, belongs to P , and is an upper bound of
F[X ∩ (p]].
(H2) Inversely well-ordered chains of the set {max F(p) : p ∈ P } have infimums in P .
Then F has a greatest fixed point, and it belongs to P . Assume moreover,
that F̂ : X → 2X is another set-valued function that satisfies the following
condition.
(H3) For each x ∈ X and y ∈ F̂(x) there exists a z ∈ F(x) such that y ≤ z.
Then the greatest fixed point of F is an upper bound for all the fixed points of
F̂.
Proof: The hypothesis (H1) ensures that defining
G(p) := max F(p), p ∈ P,   (2.11)
we obtain an increasing mapping G : P → P . Moreover, G(x) ≤ x holds by (H0),
and the hypothesis (H2) means that every inversely well-ordered chain of G[P ]
has an infimum in P . Thus the hypothesis (Ha) of Theorem 2.20 holds. The
hypothesis (H1) and the definition (2.11) of G imply that also the hypothesis
(Hb) of Theorem 2.20 is valid. Thus G has by Theorem 2.20 the greatest fixed
point x∗ . Because x∗ = G(x∗ ) = max F(x∗ ) ∈ F(x∗ ), then x∗ is also a fixed
point of F, which is by Theorem 2.20 an upper bound of all fixed points of F.
Consequently, x∗ is the greatest fixed point of F, and x∗ = max F(x∗ ) ∈ P
by (H1).
To prove the last assertion, let F̂ : X → 2X be such a set-valued function
that (H3) holds. The hypotheses (H0) and (H3) imply that x is an upper

bound of F̂[X]. Moreover, if x ≤ p in X and p ∈ P , then for each y ∈ F̂(x)
there is by (H3) a z ∈ F(x) such that y ≤ z, and z ≤ max F(p) = G(p)
by (H1) and (2.11). Thus the hypotheses of Theorem 2.20 hold when F is
replaced by F̂, whence x ≤ x∗ for each fixed point x of F̂. ⊓⊔
Remark 2.22. Applying Theorem 2.16 (a) we obtain obvious duals to Theorems 2.20 and 2.21.
2.2.4 Algorithmic Methods
Let P be a poset, and let G : P → P be increasing. The first elements of the
w-o chain C of cG-iterations are: x0 = c, xn+1 = sup{c, Gxn }, n = 0, 1, . . . ,
as long as xn+1 exists and xn < xn+1 . Assuming that strictly monotone
sequences of G[P ] are finite, then C is a finite strictly increasing sequence
(xn )_{n=0}^m . If sup{c, x} exists for every x ∈ G[P ], then x = sup{c, G[C]} =
max C = xm is the smallest solution of the equation x = sup{c, G(x)} by
Proposition 2.14. In particular, G(x) ≤ x. If G(x) < x, then the first elements of
the i.w-o chain D of xG-iterations are y0 = x = xm , yj+1 = G(yj ), as
long as yj+1 < yj . Since strictly monotone sequences of G[P ] are finite, D is
a finite strictly decreasing sequence (yj )_{j=0}^k , and x∗ = inf G[D] = yk is the
greatest fixed point of G in (x] by Theorem 2.16.
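A minimal Python sketch of this two-phase iteration follows (the book carries out such computations with Maple; the helper arguments sup_c and lt as well as the toy mapping G below are assumptions made only for the illustration).

def greatest_fixed_point_below(G, c, sup_c, lt, max_steps=1000):
    # Ascending phase: x_{n+1} = sup{c, G(x_n)} from x_0 = c until it stabilizes.
    x = c
    for _ in range(max_steps):
        nxt = sup_c(G(x))
        if not lt(x, nxt):
            break
        x = nxt
    # Descending phase: y_{j+1} = G(y_j) until it stabilizes at the greatest
    # fixed point of G below the value reached in the ascending phase.
    y = x
    for _ in range(max_steps):
        nxt = G(y)
        if not lt(nxt, y):
            break
        y = nxt
    return y

# Invented example on Z^2 with the coordinatewise order and c = (0, 0).
sup_c = lambda v: (max(0, v[0]), max(0, v[1]))                  # sup{(0, 0), v}
lt = lambda a, b: a[0] <= b[0] and a[1] <= b[1] and a != b      # strict order
G = lambda v: (min(v[0] + 1, 2), -1)                            # increasing mapping
print(greatest_fixed_point_below(G, (0, 0), sup_c, lt))         # ascends to (2, 0), descends to (2, -1)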
The above reasoning and its dual imply the following results.
Corollary 2.23. Conclusions of Theorem 2.17 hold if G : P → P is increasing and strictly monotone sequences of G[P ] are finite, and if sup{c, x} and
inf{c, x} exist for every x ∈ G[P ]. Moreover, x∗ is the last element of the
finite sequence determined by the following algorithm:
(i) x0 = c. For n from 0 while x