\documentclass{TEMA}

\usepackage[english]{babel}    % for English text

\usepackage[utf8]{inputenc}   % for accented characters
%\input{P-margin.inf}

\usepackage{amssymb}

\newtheorem{example}{Example}
\renewcommand{\theexample}{\arabic{section}.\arabic{example}}

\begin{document}

%********************************************************
\title
    {Saddle Point and Second Order Optimality in Nondifferentiable Nonlinear Abstract Multiobjective Optimization}

\author
    {L.B. DOS SANTOS%
     \thanks{lucelina@ufpr.br; partially supported by the Spanish Ministry of Education and Science (MEC) - Grant MTM2010-15383, and by the National Council for Scientific and Technological Development (CNPq-Brazil) - Grant 476043/2009-3.}\,,
     Departamento de Matemática - Universidade Federal do Paraná -
     Setor de Ciências Exatas - Centro Politécnico - CP 019081, Jd. das Américas,
	CEP 81531-990 -  Curitiba - Paraná - Brasil
     \\ \\
     M.A. ROJAS-MEDAR%
     \thanks{marko@ueubiobio.cl; partially supported by the Spanish Ministry of Education and Science
(MEC) - Grant MTM2010-15383 and by the National Fund for Scientific and Technological
Development (Fondecyt-Chile) - Project 1080628.}\,,
     Grupo de Matemática Aplicada - Departamento de Ciencias B\'asicas - Facultad de Ciencias - Universidad del Bío-Bío -
     Campus Fernando May - Casilla 447 - Chill\'an - Chile
     \\ \\
     V.A. DE OLIVEIRA%
     \thanks{antunes@ibilce.unesp.br; supported by FAPESP - Grant 2011/01977-2.}\,,
     Instituto de Biociências, Letras e Ciências Exatas, UNESP - Univ. Estadual Paulista, Câmpus de São José do Rio Preto, Depto. de Ciências de Computação e Estatística, R. Cristóvão Colombo, 2265, 15054-000, Jd. Nazareth, São José do Rio Preto - SP
     }

\criartitulo

\runningheads {Santos, Rojas-Medar, de Oliveira}{Saddle Point and Second Order Optimality}

\begin{abstract}
{\bf Abstract}.
This article deals with a vector optimization problem with cone constraints in a Banach space setting. By making use of a real-valued Lagrangian and the concept of generalized subconvex-like functions, weakly efficient solutions are characterized through saddle point type conditions. The results, jointly with the notion of generalized Hessian (introduced in [Cominetti, R., Correa, R.: A generalized second-order derivative in nonsmooth optimization. SIAM J. Control Optim. \textbf{28}, 789--809 (1990)]), are applied to achieve second order necessary and sufficient optimality conditions (without requiring twice differentiability of the objective and constraint functions) for the particular case in which the functionals involved are defined from a general Banach space into finite-dimensional ones.

{\bf Keywords}. Multiobjective optimization, abstract optimization problems, nonlinear programming, saddle point conditions, generalized second order conditions, generalized convexity.
\end{abstract}

%********************************************************
\newsec{Introduction and Formulation of the Problem}

In many situations, practical or theoretical, finite-dimensional spaces are not the most suitable ones in which to model or study a given problem. Likewise, scalar objective programming is oftentimes not the most appropriate scenario. Therefore, the development of optimality conditions for abstract vector programming problems is of great importance.

The role of nonsmooth analysis is of notable importance in optimization theory. This is due at least to the following reasons. First, in practice, differentiability assumptions may be too restrictive. Second, as pointed out by Cominetti and Correa \cite{Cominetti e Correa}, many techniques commonly employed in optimization generate ``nonsmoothness'' even when the problems are differentiable. This arises, for example, in duality theory, sensitivity and stability analysis, decomposition techniques, and penalty methods, among others.

With respect to necessary optimality conditions without differentiability, one can resort to those of Fritz John or Kuhn-Tucker type, which are obtained under various generalized derivatives concepts, or to saddle point conditions, where no differentiability assumption is required. Still on necessary conditions, we should mention the second order ones, that can also be obtained through generalized (second order) derivatives. Now, on sufficient conditions, there are those based on convexity or generalized convexity and second order type conditions. Both can be addressed in nondifferentiable frameworks.

In recent years, an extended differentiability theory has been developed through various concepts of generalized differentiability, and first order optimality conditions for scalar optimization problems have been established (see Clarke \cite{Clarke} and Rockafellar \cite{Rockafellar}).

Also, a significant theory of generalized second order differentiability has been developed (see, for example, Aubin and Ekeland \cite{Aubin e Ekeland}, Chaney \cite{Chaney} and Hiriart-Urruty \cite{Hiriart-Urruty}). In particular, Cominetti and Correa introduced in \cite{Cominetti e Correa} the notions of second order directional derivative and generalized Hessian and gave some second order optimality conditions for an abstract scalar minimization problem.

We now cite a few works concerning the aforementioned topics. Multiplier rules of Fritz John and Kuhn-Tucker types were studied, for example, in Bellaassali and Jourani \cite{Bellaassali}, Da Cunha and Polak \cite{dacunha}, Jahn \cite{jahn} and Dos Santos et al. \cite{DosSantos}. Saddle point conditions were investigated, for instance, in Bigi \cite{Bigi} and Chen et al. \cite{chen}. Bigi characterized saddle points assuming convex data, while Chen et al. used a distinct type of generalized convexity. Second order conditions were explored, more recently, in Gfrerer \cite{Gfrerer} and Taa \cite{Taa}. In \cite{Gfrerer}, the results were obtained by making use of Hadamard derivatives. In \cite{Taa}, abstract problems are considered, but under twice differentiability.

The reader interested in a more comprehensive bibliographic review of these issues may consult the articles just quoted, which provide fine lists of references.

The aim of this paper is to contribute to the development of the optimality conditions theory of nondifferentiable nonlinear abstract multiobjective optimization.  At first, we will consider a vector optimization problem which can be posed as
$$
\begin{array}{ll}
\mbox{minimize} & f(x) \\
\mbox{subject to} & g(x)\in -K, \\
& x\in S,%
\end{array}%
\eqno{\textrm{(P)}}
$$
where $f:S \subseteq E\rightarrow F$ and $g: S \subseteq E\rightarrow G$ are given (not necessarily differentiable) functions, $S$ is a nonempty subset of $E$ and $E,F$ and $G$ are Banach spaces. The spaces $F$ and $G$ are ordered by closed convex cones $Q\subset F$ and $K\subset G$. Also, we assume that $Q$ has nonempty interior.

We denote by $\mathbb{F}$ the feasible set of (P), that is,
$$
\mathbb{F}:=\{x\in S:g(x)\in -K\}.
$$

We will consider the so called weakly efficient solutions of (P). We recall that $\bar{x}\in \mathbb{F}$ is said to be a \textsl{weakly efficient solution} (respectively, \textsl{local weakly efficient solution}) of (P) if there does not
exist $x$ feasible for (P) such that $f(x)-f(\bar{x})\in -\mathrm{int}\,Q$ (respectively, if there exists a neighborhood $V$ of $\bar{x}$ such that there does not exist $x\in V\cap \mathbb{F}$ such that $f(x)-f(\bar{x})\in -\mathrm{int}\,Q)$.
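As a simple illustration of this concept, take $E=\mathbb{R}$, $F=\mathbb{R}^{2}$ ordered by $Q=\mathbb{R}_{+}^{2}$, $f(x)=(x,0)$ and suppose that the feasible set is $\mathbb{F}=[0,1]$. Since the second component of $f$ is constant, there is no feasible $x$ with $f(x)-f(\bar{x})\in -\mathrm{int}\,\mathbb{R}_{+}^{2}$, and hence every $\bar{x}\in [0,1]$ is a weakly efficient solution, although only $\bar{x}=0$ minimizes the first component.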

Following Osuna-Gómez et al. \cite{Osuna-Gomez1}, we give a definition of saddle points which has the feature of being based on solving scalar problems rather than vector ones, as is usual. We then show that every point satisfying our definition is a weakly efficient solution. The converse is also obtained, but under a generalized convexity assumption and when (P) satisfies a constraint qualification. Subsequently, we apply these results to the (finite-dimensional) particular case when $Q=\mathbb{R}_{+}^{p}$ and $K=\mathbb{R}_{+}^{m}$:
$$
\begin{array}{ll}
\mbox{minimize} & f(x):=(f_{1}(x),...,f_{p}(x)) \\
\mbox{subject to} & g_{i}(x)\leq 0,\mbox{ }i=1,...,m, \\
& x \in S,%
\end{array}%
\eqno{\textrm{(PF)}}
$$
where $f_{j},g_{i} : S \subseteq X \rightarrow \mathbb{R}$, $j \in J:=\{1,...,p\}$, $i \in I:=\{1,...,m\}$, are continuous and Gâteaux differentiable functions and $S$ is a nonempty open subset of a Banach space $X$. We obtain second order conditions for the nonsmooth finite-dimensional problem (PF) in terms of second order directional derivatives (see Cominetti and Correa \cite{Cominetti e Correa}).

This work is divided into three more sections. In the next section, we recall some results on generalized subconvex-like functions and an alternative theorem; we also recall some properties of the generalized directional derivative and Hessian, introduced by Cominetti and Correa in \cite{Cominetti e Correa}. In Section \ref{saddle} we establish saddle point type theorems for the vector optimization problem (P) and, finally, in Section \ref{second}, we use these results to obtain second order conditions for problem (PF).

%********************************************************
\newsec{Preliminaries}                                %**
%********************************************************

This section is devoted to presenting some definitions and auxiliary results which will be useful in the next sections. First, a definition and a technical lemma concerning dual cones are stated. Then come two subsections, the first about the notion of generalized subconvex-like functions and a Gordan type theorem of the alternative for this sort of function, and the second about the concept of second order generalized derivatives and some related results.

Let $X$ be a locally convex topological vector space. $X^{*}$ denotes the (topological) dual of $X$ and $\langle \cdot , \cdot \rangle $ the canonical bilinear (duality) form between $X$ and $X^{*}$.

\begin{defTEMAi}
The \textsl{dual cone} (or \textsl{polar cone}) of a set $Q\subset X$ is defined as the convex cone
\[
Q^{*}:=\{f\in X^{*}:\langle f,x\rangle \geq 0~\forall~x\in Q\}.
\]
\end{defTEMAi}
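For instance, in $X=\mathbb{R}^{n}$, identifying $X^{*}$ with $\mathbb{R}^{n}$ through the usual inner product, the nonnegative orthant is self-dual:
\[
(\mathbb{R}_{+}^{n})^{*}=\{y\in \mathbb{R}^{n}:\langle y,x\rangle \geq 0~\forall~x\in \mathbb{R}_{+}^{n}\}=\mathbb{R}_{+}^{n}.
\]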

\begin{lemmaTEMA} \label{Lemma13}
\label{craven} Let $F$ be a Banach space and $Q\subset F$ a closed convex cone. Then
\[
\langle f,x\rangle >0 ~\forall~ f\in Q^{*}\setminus \{0\},~\forall~ x\in \mathrm{int}\,Q.
\]
\end{lemmaTEMA}

The proof can be found in Craven \cite{Craven}.

%*****************************************************************************************
\subsection{Generalized subconvex-like functions and a Gordan type alternative theorem}%**
%*****************************************************************************************

Convexity and generalized convexity are very important concepts in optimization theory. One reason for this importance is that for these classes of functions it is possible to establish alternative theorems and, consequently, to obtain necessary and/or sufficient optimality conditions. The generalized convexity notion that we will use here is that of \textsl{generalized subconvex-like functions}, introduced by Xinmin Yang in \cite{Xinming Yang}, where the author showed that these functions satisfy a Gordan type alternative theorem. He also showed that the class of generalized subconvex-like functions comprises the subconvex-like, convex-like and convex classes of functions. Thus the generalized subconvex-like functions form a large class which satisfies a Gordan type alternative theorem.
%
\begin{defTEMAi}
Let $E$ and $F$ be normed spaces, $S_{0}$ a nonempty subset of $E$, $Q\subset F$ a convex set with nonempty interior, and $f:S_{0}\subset E\rightarrow F$. We say that $f$ is a \textsl{\ generalized subconvex-like function} if there exists $u\in \mathrm{int}\,Q$ such that for each $\alpha \in (0,1)$ and arbitrary $x_{1},x_{2}\in S_{0}$ and $\varepsilon >0$, there exist $x_{3}\in S_{0}$ and $\rho >0$ such that
\[
\varepsilon u+\alpha f(x_{1})+(1-\alpha )f(x_{2})-\rho f(x_{3})\in Q.
\]
\end{defTEMAi}
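For instance, if $S_{0}$ is convex, $Q$ is a convex cone with nonempty interior and $f$ is $Q$-convex, in the sense that $\alpha f(x_{1})+(1-\alpha )f(x_{2})-f(\alpha x_{1}+(1-\alpha )x_{2})\in Q$ for all $x_{1},x_{2}\in S_{0}$ and $\alpha \in (0,1)$, then $f$ is generalized subconvex-like: it suffices to take any $u\in \mathrm{int}\,Q$, $x_{3}=\alpha x_{1}+(1-\alpha )x_{2}$ and $\rho =1$, since
\[
\varepsilon u+\alpha f(x_{1})+(1-\alpha )f(x_{2})-f(x_{3})\in \varepsilon u+Q\subset Q.
\]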

The class of generalized subconvex-like functions satisfies the following alternative theorem (see \cite{Xinming Yang}, pp. 128--130):

\begin{thmTEMA}[Generalized Alternative Theorem] \label{teorema de alternativa}
Let $E$ and $F$ be two Banach spaces, $Q\subset F$ a convex cone with nonempty interior and $S\subset E$ nonempty. Assume that $f:S\rightarrow F$ is generalized subconvex-like. Then, exactly one of the following statements holds:
%
\begin{enumerate}
\item[a)] There exists $x\in S$ such that $-f(x)\in \mathrm{int}\,Q;$
%
\item[b)] There exists $s^{*}\in Q^{*}\setminus \{0\}$ such that $\langle s^{*},f(x) \rangle \geq 0~\forall~ x\in S$.
\end{enumerate}
\end{thmTEMA}
%

%******************************************************************************
\subsection{Second order generalized derivative and the generalized Hessian}%**
%******************************************************************************

In this subsection we recall some results concerning the generalized second order derivative and the generalized Hessian. We start by giving their definitions. Then, certain important classes of functions are introduced. The subsection closes with two propositions, which exhibit topological properties of the generalized derivative and a second order Taylor type expansion. For more details, see Cominetti and Correa \cite{Cominetti e Correa}.

In the following, $X$ is a locally convex topological vector space.

\begin{defTEMAi}
The generalized second order directional derivative of a function $f : X \rightarrow \mathbb{R}$ at $x \in X$ in the directions $(u,v) \in X \times X$ is defined by
%
\[
f^{\circ\circ}(x;u,v):=\limsup_{{y \rightarrow  x \atop t,s \downarrow 0}} \frac{f(y+su+tv)-f(y+su)-f(y+tv)+f(y)}{st}
\]
%
and the generalized Hessian of $f$ at $x$ is the multifunction  $\partial^{2}f(x):X\rightrightarrows X^{*}$ given by
\[
\partial ^{2}f(x)(u):=\{x^{*}\in X^{*}:\langle x^{*},v\rangle \leq f^{\circ\circ}(x;u,v)~\forall~ v\in X\}.
\]
\end{defTEMAi}
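To illustrate, when $f$ is twice continuously differentiable in a neighborhood of $x$, the difference quotient above converges to the classical second derivative, so that
\[
f^{\circ\circ}(x;u,v)=\langle \nabla ^{2}f(x)u,v\rangle \quad \mbox{and}\quad \partial ^{2}f(x)(u)=\{\nabla ^{2}f(x)u\},
\]
and the generalized Hessian reduces to the usual one.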
%
In order to obtain continuity properties for the generalized Hessian, it is necessary to define the following classes of functions:
\begin{defTEMAi}
A function $f:X\rightarrow \mathbb{R}$ is \textsl{twice} $C$-\textsl{differentiable} at $x$ if $f^{\circ\circ}(x;u,\cdot )$ is lower semicontinuous (l.s.c.) for each $u\in X$.
\end{defTEMAi}
%
\begin{defTEMAi}
We say that $f:X\rightarrow \mathbb{R}$ is \textsl{twice locally Lipschitz} at $x$ if for each $v\in X$ there exist a neighborhood $V$ of $x$ and a neighborhood $U$ of zero such that the set $f^{\circ\circ}(V;U,v)$ is bounded in $\mathbb{R}$.

If the boundedness of $f^{\circ\circ}(V;U,v)$ is uniform in $v$, that is, if there exist neighborhoods $V$ of $x$ and $U$ of zero such that $f^{\circ\circ}(V; U, U)$ is bounded in $\mathbb{R}$, then we say that $f$ is \textsl{twice uniformly locally Lipschitzian} at $x$.
\end{defTEMAi}
%
In \cite{Cominetti e Correa} it is proved that if $f : X \rightarrow \mathbb{R}$ is twice locally Lipschitz at $x$, then $f$ is twice $C$-differentiable at every point of $V$, where $V$ is chosen as in the last definition.
%
\begin{defTEMAi}
Let $Y,Z$ be topological vector spaces and $A:Y\rightrightarrows Z$ be a multifunction. We say that $A$ is \textsl{locally compact} at $y\in Y$ if there exists a neighborhood $V$ of $y$ such that $A(V)=\bigcup\limits_{y^{\prime
}\in V}A(y^{\prime })$ is relatively compact.

We say that $A$ is \textsl{closed} at $y$ if for each net $y_{\alpha }\rightarrow y$ and $z_{\alpha}\rightarrow z$ with $z_{\alpha }\in A(y_{\alpha })~\forall~\alpha$, we have $z\in A(y).$

If $A$ is locally compact and closed at $y,$ we say that $A$ is \textsl{upper semicontinuous} (u.s.c.) at $y.$
\end{defTEMAi}
%
An important class of twice uniformly locally Lipschitzian functions is defined below.
%
\begin{defTEMAi}
We say that $f:X\rightarrow \mathbb{R}$ is a $C^{1,1}$-function if it is G\^ateaux differentiable and the (G\^ateaux) derivative $\nabla f$ is locally Lipschitz.
\end{defTEMAi}
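For instance, the function $f:\mathbb{R}\rightarrow \mathbb{R}$ given by $f(x)=\frac{1}{2}x|x|$ is a $C^{1,1}$-function which is not twice differentiable at the origin: indeed, $\nabla f(x)=|x|$, which is Lipschitz but fails to be differentiable at $x=0$.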
%
\begin{propTEMAi}[Cominetti and Correa \cite{Cominetti e Correa}] \label{proposition10}
Assume that $f:X\rightarrow \mathbb{R}$ is twice locally Lipschitz at $x$. Then, for each $u\in X$, the following assertions are satisfied:
%
\begin{enumerate}
\item[(a)] $\partial ^{2}f(\cdot )(u)$ is locally w*-compact and $\partial^{2}f(x)(u)$ is w*-compact;
%
\item[(b)] $f^{\circ\circ}(\cdot ;\cdot ,v)$ is u.s.c., for each $v\in X$ and $\partial^{2}f(\cdot )(\cdot )$ is closed;
%
\item[(c)] If $f$ is $C^{1,1}$, then $\partial ^{2}f(\cdot )(\cdot ) $ is locally w*-compact.
\end{enumerate}
\end{propTEMAi}

The following proposition is a version of the second order Taylor expansion for twice $C$-differentiable functions.

\begin{propTEMAi}[Cominetti and Correa \cite{Cominetti e Correa}] \label{proposition11}
Let $f:X\rightarrow \mathbb{R}$ be a continuously G\^ateaux-differentiable function which is twice $C$-differentiable on the closed segment $[x,y]\subset X.$ Then, there exists $\xi$ in the open segment $\left] x,y\right[ $ such that
\[
f(y)\in f(x)+\langle \nabla f(x),y-x\rangle +\frac{1}{2}\overline{\langle
\partial ^{2}f(\xi )(y-x),y-x\rangle }
\]%
and the closure is unnecessary when $f$ is $C^{1,1}.$
\end{propTEMAi}

%******************************************************
\newsec{Saddle Point Type Conditions} \label{saddle}%**
%******************************************************

Following the guidelines of Kuhn-Tucker and Fritz John, we characterize weakly efficient solutions of problem (P) in terms of saddle point type conditions. Here we give a saddle point definition for the vector optimization problem which is based on solving scalar problems instead of vector ones, unlike most existing definitions in the literature. In other words, such a definition has the property of not involving the resolution of a multiobjective problem. Furthermore, our definition generalizes the one introduced in Osuna-Gómez et al. \cite{Osuna-Gomez1} for the corresponding case, when (P) is stated in a finite-dimensional setting.
%
\begin{defTEMAi}
We say that $(\bar{x},\bar{r},\bar{v})\in E\times F^{*}\times G^*$ is a \textsl{multiple saddle point} for the problem (P) if
\begin{equation}
\bar{r}\circ f(\bar{x})+v\circ g(\bar{x})\leq \bar{r}\circ f(\bar{x})+\bar{v}%
\circ g(\bar{x})\leq \bar{r}\circ f(x)+\bar{v}\circ g(x)  \label{40}
\end{equation}
$\forall~ v\in K^{*},~\forall~x\in S$ and if $(\bar{r},\bar{v})\in Q^{*}\times
K^{*}$, $\bar{r}\neq 0$.
\end{defTEMAi}
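In the particular case of problem (PF), where $F^{*}$ and $G^{*}$ are identified with $\mathbb{R}^{p}$ and $\mathbb{R}^{m}$, condition (\ref{40}) reads
\[
\sum_{j=1}^{p}\bar{r}_{j}f_{j}(\bar{x})+\sum_{i=1}^{m}v_{i}g_{i}(\bar{x})\leq \sum_{j=1}^{p}\bar{r}_{j}f_{j}(\bar{x})+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(\bar{x})\leq \sum_{j=1}^{p}\bar{r}_{j}f_{j}(x)+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(x)
\]
$\forall~v\in \mathbb{R}_{+}^{m},~\forall~x\in S$, with $(\bar{r},\bar{v})\in \mathbb{R}_{+}^{p}\times \mathbb{R}_{+}^{m}$, $\bar{r}\neq 0$, which recovers the finite-dimensional definition of Osuna-Gómez et al. \cite{Osuna-Gomez1}.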

As in the classical case, if $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point, then $\bar{x}$ is a weakly efficient solution of (P). Before we prove this assertion we need the following auxiliary result:

\begin{lemmaTEMA} \label{Lemma14}
Let $E,F$ be two Banach spaces. Assume that $F$ is ordered by the convex cone $Q\subset F$ with nonempty interior and let $f:\Gamma \subset E\rightarrow F$. Consider the vectorial optimization problem:
%
$$
\begin{array}{ll}
\mbox{minimize} & f(x) \\
\mbox{subject to} & x\in \Gamma.
\end{array}%
\eqno{(\widehat{\mathrm{P}})}
$$
%
If there exists $\bar{r}\in Q^{\ast }\setminus \{0\}$ such that $\bar{x}\in \Gamma $ is a solution of
$$
\begin{array}{ll}
\mbox{minimize} & \bar{r}\circ f(x) \\
\mbox{subject to} & x\in \Gamma,
\end{array}%
\eqno(\widehat{\mathrm{P}}(\bar{r}))
$$
%
then $\bar{x}$ is a weakly efficient solution of $(\widehat{\mathrm{P}})$.
\end{lemmaTEMA}
%
\begin{proof}
Suppose that $\bar{x}\in \Gamma $ is not a weakly efficient solution of ($\widehat{\mathrm{P}})$. In this case, there exists $x\in \Gamma $ such that $f(x)<_{Q}f(\bar{x})$, that is, $f(\bar{x})-f(x)\in \mathrm{int}\,Q$. Since $\bar{r}\in Q^{\ast }\setminus \{0\}$, by Lemma \ref{Lemma13}, we have $\bar{r}(f(\bar{x})-f(x))>0$ and, by linearity of $\bar{r}$, we have
\[
\bar{r}\circ f(\bar{x})>\bar{r}\circ f(x),
\]%
which is a contradiction.
\end{proof}

\begin{thmTEMA}
If $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point, then $\bar{x}$ is
a weakly efficient solution of (P).
\end{thmTEMA}
%
\begin{proof}
By Lemma \ref{Lemma14}, it is enough to show that $\bar{x}$ is a solution of the problem
%
$$
\begin{array}{ll}
\mbox{minimize} & \bar{r}\circ f(x) \\
\mbox{subject to} & x\in \mathbb{F},%
\end{array}%
$$
%
where $\mathbb{F}:=\{x\in S:-g(x)\in K\}$. Since $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point, we have
%
$$
\bar{r} \circ f(\bar{x})+v \circ g(\bar{x}) \leq \bar{r} \circ f(\bar{x})+\bar{v} \circ g(\bar{x}) \leq \bar{r} \circ f(x) + \bar{v} \circ g(x) ~\forall~v\in K^{*},~\forall~x\in S.
$$
%
In particular, setting $v=0$, we obtain $\bar{v} \circ g(\bar{x})\geq 0$; since $g(\bar{x})\in -K$ and $\bar{v}\in K^{*}$, we also have $\bar{v}\circ g(\bar{x})\leq 0$, so that $\bar{v}\circ g(\bar{x})=0$. Then,
\[
\bar{r}\circ f(\bar{x})\leq \bar{r}\circ f(x)~\forall~x\in \mathbb{F}
\]%
and, thus, $\bar{x}$ is a solution of $(\widehat{\mathrm{P}}(\bar{r}))$.
\end{proof}

The converse of the above result also holds under certain generalized convexity hypotheses (in our case, generalized subconvex-likeness) and regularity of the constraints of the problem. We use a Slater type constraint qualification.

\begin{defTEMAi}[Slater type Constraint Qualification]
We say that the constraint qualification (CQ) is satisfied if there exists $\tilde{x}\in \mathbb{F}$ such that $g(\tilde{x})\in -\mathrm{int}\,K$.
\end{defTEMAi}
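In the particular case of problem (PF), where $K=\mathbb{R}_{+}^{m}$, condition (CQ) takes the familiar Slater form: there exists $\tilde{x}\in S$ such that
\[
g_{i}(\tilde{x})<0,\quad i=1,\ldots,m.
\]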

\begin{thmTEMA} \label{teorema16}
Assume that in the problem (P) the function $(f-f(\bar{x}),g)$ is generalized subconvex-like (with respect to the cone $Q\times K\subset F\times G)$. If $\bar{x}\in \mathbb{F}$ is a weakly efficient solution of (P) and (CQ) is verified, then there exist $\bar{r},\bar{v}$ such that $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point.
\end{thmTEMA}
%
\begin{proof}
The proof follows from Theorem \ref{teorema de alternativa}. In fact, if $\bar{x}$ is a weakly efficient solution of (P), there does not exist a solution $x \in E$ for the following system
\[
\left\{
\begin{array}{rcl}
f(x)-f(\bar{x}) & \in & -\mathrm{int}\,Q, \\
g(x) & \in & -K.%
\end{array}
\right.
\]
%
Since the function $(f-f(\bar{x}),g)$ is generalized subconvex-like, by Theorem \ref{teorema de alternativa}, there exists $(\bar{r},\bar{v})\in (Q^{*}\times K^{*})\setminus \{(0,0)\}$ such that
\[
\bar{r} \circ (f(x)-f(\bar{x})) + \bar{v}\circ g(x)\geq 0 ~\forall~ x\in S
\]
that is,
\begin{equation}
\bar{r}\circ f(x)+\bar{v}\circ g(x)\geq \bar{r}\circ f(\bar{x})~\forall~x \in S.  \label{35}
\end{equation}
%
If $x=\bar{x}$ in (\ref{35}), then $\bar{v}\circ g(\bar{x})\geq 0$ and, therefore $\bar{v}\circ g(\bar{x})=0$, since $g(\bar{x})\in -K$ and $\bar{v}\in K^{*}$. Furthermore, $v\circ g(\bar{x})\leq 0\ \forall~ v\in K^{*}$. Thus, we have
\begin{eqnarray*}
\bar{r} \circ f(\bar{x}) + v \circ g(\bar{x}) & \leq & \bar{r} \circ f(\bar{x}) + 0 \\
                                              &  =   & \bar{r} \circ f(\bar{x}) + \bar{v} \circ g(\bar{x}) \\
                                              &  =   & \bar{r} \circ f(\bar{x}) \\
                                              & \leq & \bar{r} \circ f(x) + \bar{v} \circ g(x)
\end{eqnarray*}
for all $v \in K^*$ and $x \in S$.

Now, we show that $\bar{r}\neq 0$. From condition (CQ), there exists $\tilde{x} \in S$ with $g(\tilde{x}) \in -\mathrm{int}\,K$. Taking $x=\tilde{x}$ in (\ref{35}), we obtain
\begin{equation} \label{*}
\bar{r}\circ f(\bar{x})\leq \bar{r}\circ f(\tilde{x})+\bar{v}\circ g(\tilde{x}).
\end{equation}
%
By contradiction, assume that $\bar{r}=0$. Then, $\bar{v}\neq 0$ and as $g(\tilde{x}) \in -\mathrm{int}\,K$, it follows from Lemma \ref{Lemma13} that $\bar{v} \circ g(\tilde{x})<0$. On the other hand, with $\bar{r}=0$ in (\ref{*}) we get the opposite inequality, so that we have a contradiction. Therefore, $\bar{r}\neq 0$ and hence $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point.
\end{proof}

%*******************************************************
\newsec{Second Order Conditions} \label{second}      %**
%*******************************************************

Here two relevant results regarding second order optimality conditions for (PF) are proposed. Necessity and sufficiency are tackled as applications of the notions and results studied above. It is worth mentioning that such conditions are established without demanding twice differentiability (in the classical sense).

We consider the following vectorial optimization problem:
%
$$
\begin{array}{ll}
\mbox{minimize} & f(x):=(f_{1}(x),...,f_{p}(x)) \\
\mbox{subject to} & g_{i}(x)\leq 0,\mbox{ }i=1,...,m, \\
& x \in S,
\end{array}%
\eqno{\mathrm{(PF)}}
$$%
where $X$ is a Banach space and $f_{j},g_{i}: S \subseteq X \rightarrow \mathbb{R},\ j=1,...,p,\ i=1,...,m,$ are continuous and G\^ateaux differentiable functions and $S$ is a nonempty open subset of $X$.

We prove second order conditions for weak efficiency in (PF) through the notions of directional derivative, generalized Hessian (Cominetti and Correa \cite{Cominetti e Correa}) and the saddle point conditions studied in the last section.

We consider the Lagrangian function
\[
L_{r,v}(x):=\sum_{j=1}^{p}r_{j}f_{j}(x)+\sum_{i=1}^{m}v_{i}g_{i}(x),
\]%
where $r\in \mathbb{R}_{+}^{p},\ v\in \mathbb{R}_{+}^{m}$ and $x\in X.$

It is well known (see Da Cunha and Polak \cite{dacunha} or Jahn \cite{jahn}) that if $\bar{x}$ is a weakly efficient solution of (PF) and a regularity condition holds, then there exist $\bar{r} \in \mathbb{R}_{+}^{p} \setminus \{ 0 \}$ and $\bar{v} \in \mathbb{R}_{+}^{m}$ such that
\begin{eqnarray}
&& \sum_{j=1}^{p}\bar{r}_{j}\nabla f_{j}(\bar{x})+\sum_{i=1}^{m}\bar{v}_{i}\nabla g_{i}(\bar{x})=0, \label{41} \\
%
&& \bar{v}_{i}g_{i}(\bar{x})=0,\ i=1,\ldots,m. \label{42}
\end{eqnarray}%
In this case, $(\bar{r},\bar{v})$ is said to be a pair of \textsl{multipliers}. Here we give a proof of this result assuming that the Slater type constraint qualification and the generalized subconvex-likeness of the functionals are satisfied.
%
\begin{thmTEMA}
Let $\bar{x}$ be a weakly efficient solution for (PF). We assume that the function $(f-f(\bar{x}),g)$ is generalized subconvex-like (with respect to the cone $\mathbb{R}_{+}^{p}\times \mathbb{R}_{+}^{m}$) and that (PF) satisfies the Slater constraint qualification. Then, there exists a pair of multipliers $(\bar{r},\bar{v})$ satisfying (\ref{41})-(\ref{42}).
\end{thmTEMA}
%
\begin{proof}
From Theorem \ref{teorema16}, there exists $(\bar{r},\bar{v})\in \mathbb{R}_{+}^{p}\times \mathbb{R}_{+}^{m}$ such that $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point. In particular, the following inequality holds true:
\[
\sum_{j=1}^{p}\bar{r}_{j}f_{j}(\bar{x})+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(\bar{x}) \leq \sum_{j=1}^{p}\bar{r}_{j}f_{j}(x)+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(x) ~\forall~ x \in S.
\]
%
Since $f_{j},g_{i}$ are G\^ateaux-differentiable, the inequality above implies that
\[
\sum_{j=1}^{p}\bar{r}_{j}\nabla f_{j}(\bar{x})+\sum_{i=1}^{m}\bar{v}_{i}\nabla g_{i}(\bar{x})=0.
\]
%
Furthermore, as we know from the proof of Theorem \ref{teorema16}, when $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point we have
\[
\sum_{i=1}^{m}\bar{v}_{i}g_{i}(\bar{x})=0.
\]%
\end{proof}
%
\begin{thmTEMA}[Second order necessary conditions]
Assume that $\bar{x}$ is a weakly efficient solution of (PF). If $(f-f(\bar{x}),g)$ is a generalized subconvex-like function and (PF) satisfies the Slater constraint
qualification, then
\begin{enumerate}
\item[(i)] there exists $(\bar{r},\bar{v})$ such that $(\bar{x},\bar{r},\bar{v})$ is a multiple saddle point;
%
\item[(ii)] the following inequality is verified:
\[
L_{\bar{r},\bar{v}}^{\circ\circ}(\bar{x};u,u)\ge 0 ~\forall~ u \in S.
\]
\end{enumerate}
\end{thmTEMA}
%
\begin{proof}
(i) It follows directly from Theorem \ref{teorema16}.

\noindent(ii) It is obvious that
\[
L_{\bar{r},\bar{v}}^{\circ\circ}(\bar{x};u,u)\geq \limsup\limits_{t\downarrow 0}\frac{1}{t} \langle \nabla L_{\bar{r},\bar{v}}(\bar{x}+tu),u\rangle ~\forall~u\in X.
\]
%
Let $u \in S$. By the saddle point conditions,
\[
\sum_{j=1}^{p}\bar{r}_{j}f_{j}(\bar{x})+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(\bar{x}) \leq \sum_{j=1}^{p}\bar{r}_{j}f_{j}(\bar{x}+tu)+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(\bar{x}+tu)
\]
for all $t \in (0,t_0]$, where $t_0 > 0$ is such that $\bar{x}+t_0u \in S$. Hence
\[
L_{\bar{r},\bar{v}}(\bar{x}+tu) \geq L_{\bar{r},\bar{v}}(\bar{x}) ~\forall~ t \in (0,t_0].
\]
%
For fixed $t \in (0,t_0]$, by the Mean Value Theorem, there exists $\tilde{t} \in (0,t)$ such that
\[
t \, \langle \nabla L_{\bar{r},\bar{v}}(\bar{x}+\tilde{t}u) , u \rangle \geq 0
\]
and, consequently,
\[
\limsup\limits_{\tilde{t} \downarrow 0}\frac{1}{\tilde{t}} \langle \nabla L_{\bar{r},\bar{v}}(\bar{x}+\tilde{t}u) , u \rangle \geq 0.
\]
%
Thus, $L_{\bar{r},\bar{v}}^{\circ\circ}(\bar{x};u,u) \geq 0$ for all $u \in S.$
\end{proof}

\begin{defTEMAi}
Let $Q\subset X$. The \textsl{Bouligand's tangent cone} to $Q$ at $x\in Q$ is defined as
$$
B(Q,x):=\{u \in X : \exists ~ t_{\alpha } \downarrow 0,\ \exists ~ u_{\alpha} \rightarrow u \mbox{ such that } x + t_{\alpha} u_{\alpha} \in Q\}.
$$
\end{defTEMAi}
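For instance, if $Q=\mathbb{R}_{+}^{2}\subset \mathbb{R}^{2}$, then $B(Q,(0,0))=\mathbb{R}_{+}^{2}$, while $B(Q,x)=\mathbb{R}^{2}$ whenever $x\in \mathrm{int}\,Q$; in general, $B(Q,x)$ is a closed cone.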

\begin{thmTEMA}[Second order sufficient conditions] \label{SufSecOrderCond}
Assume that in problem (PF) the functions $f_{j},g_{i} : \mathbb{R}^{n} \rightarrow \mathbb{R}$ are $C^{1,1}$-functions, for all $j=1,\ldots,p$ and $i=1,\ldots,m$. Then, a sufficient condition for a feasible point $\bar{x}$ to be a local weakly efficient solution of (PF) is that there exists a pair of multipliers $(\bar{r},\bar{v})\in (\mathbb{R}_{+}^{p} \setminus \{0\}) \times \mathbb{R}_{+}^{m}$ such that
\[
-L_{\bar{r},\bar{v}}^{\circ\circ}(\bar{x};u,-u)>0~\forall~u\in B(\mathbb{F},\bar{x})\setminus \{0\}.
\]
\end{thmTEMA}
%
\begin{proof}
Suppose that $\bar{x}$ is not a local weakly efficient solution for (PF). Then, there exists a sequence $(x_{k}) \subset \mathbb{F} \setminus \{ \bar{x} \}$ such that $x_{k} \rightarrow \bar{x}$ and
\begin{equation} \label{ineq}
f_{j}(x_{k}) < f_{j}(\bar{x}),\ j=1,...,p,~\forall~ k\in \mathbb{N}.
\end{equation}
Setting
\[
u_{k}:=\frac{x_{k}-\bar{x}}{\left\| x_{k}-\bar{x} \right\| }
\]
we see (taking subsequences if necessary) that $u_{k}\rightarrow u$, for some $u \in X$. Then clearly $u \in B(\mathbb{F},\bar{x}) \setminus \{0\}$. Put
\[
a_{k} := \frac{2}{\left\| x_{k}-\bar{x}\right\| ^{2}}(L_{\bar{r},\bar{v}}(x_{k})-L_{\bar{r},\bar{v}}(\bar{x})).
\]
From (\ref{ineq}), the feasibility of $x_{k}$ and the fact that $(\bar{r},\bar{v})$ is a pair of multipliers, it follows that
\[
L_{\bar{r},\bar{v}}(x_{k})-L_{\bar{r},\bar{v}}(\bar{x})=\sum_{j=1}^{p}\bar{r}_{j}(f_{j}(x_{k})-f_{j}(\bar{x}))+\sum_{i=1}^{m}\bar{v}_{i}g_{i}(x_{k})<0,
\]
so that $a_{k}<0 ~\forall~k$.
%
By Proposition \ref{proposition11}, there exists $\xi _{k}\in\ ]x_{k},\bar{x}[$ such that $a_{k}\in \langle \partial ^{2}L_{\bar{r},\bar{v}}(\xi_{k})(u_{k}),u_{k}\rangle ~\forall~k$. Hence, there exists $x_{k}^{\ast }\in \partial
^{2}L_{\bar{r},\bar{v}}(\xi_{k})(u_{k})$ such that $a_{k}=\langle x_{k}^{\ast},u_{k}\rangle < 0 ~\forall~k$. Furthermore, $\xi_k \rightarrow \bar{x}$.
%
By Proposition \ref{proposition10}-(c) we can assume, without loss of generality, that $x_{k}^{\ast }\rightarrow x^{\ast }\in \partial ^{2}L_{\bar{r},\bar{v}}(\bar{x})(u)$. Therefore we obtain
$$
\langle x^{\ast} , u \rangle = \lim \langle x^{\ast}_k , u_k \rangle \leq 0.
$$
In this way we have
$$
-L_{\bar{r},\bar{v}}^{\circ\circ}(\bar{x};u,-u) \leq -\langle x^{\ast} , -u \rangle \leq 0
$$
with $u \in B(\mathbb{F},\bar{x}) \setminus \{0\}$, which contradicts the hypothesis.
\end{proof}

We now present a very simple example illustrating Theorem \ref{SufSecOrderCond}.

\begin{example}
Consider the problem 
$$
\begin{array}{ll}
\mbox{minimize} & f(x):=(f_{1}(x),f_{2}(x)) = (|x|^{3/2},|x-1|^{3/2}) \\
\mbox{subject to} & x \in \mathbb{R}.
\end{array}%
$$%

Observe that $f_{1}^{\prime}$ and $f_{2}^{\prime}$ fail to be differentiable (in the classical sense) at $x=0$ and $x=1$, respectively, so neither $f_{1}$ nor $f_{2}$ is twice differentiable.

Let $\bar{x}=0$ and $\hat{x}=1$. Then it is easily verified that $L^{\prime}_{\bar{r}}(\bar{x}) = 0$ for $\bar{r} = (1,0)$ and $L^{\prime}_{\hat{r}}(\hat{x}) = 0$ for $\hat{r} = (0,1)$, so that $\bar{r}$ and $\hat{r}$ are multipliers for $\bar{x}$ and $\hat{x}$, respectively. We also have that $-L^{\circ\circ}_{r}(x;u,-u) > 0$ for all $u \in \mathbb{R}\setminus\{0\}$, for $(r,x)=(\bar{r},\bar{x})$ and $(r,x)=(\hat{r},\hat{x})$. Thus $\bar{x}$ and $\hat{x}$ are weakly efficient solutions of this problem.
\end{example}
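The first order verification in the example above can be written out explicitly (a sketch, assuming the scalar Lagrangian $L_{r}=r_{1}f_{1}+r_{2}f_{2}$ with no constraint term, since the problem is unconstrained):

```latex
% First order check at \bar{x}=0 with \bar{r}=(1,0):
% here L_{\bar{r}}(x) = f_{1}(x) = |x|^{3/2}, hence
\[
L^{\prime}_{\bar{r}}(x) = \tfrac{3}{2}\,|x|^{1/2}\operatorname{sgn}(x),
\qquad
L^{\prime}_{\bar{r}}(\bar{x}) = L^{\prime}_{\bar{r}}(0) = 0.
\]
% Analogously, at \hat{x}=1 with \hat{r}=(0,1),
% L_{\hat{r}}(x) = f_{2}(x) = |x-1|^{3/2}, so
\[
L^{\prime}_{\hat{r}}(x) = \tfrac{3}{2}\,|x-1|^{1/2}\operatorname{sgn}(x-1),
\qquad
L^{\prime}_{\hat{r}}(\hat{x}) = L^{\prime}_{\hat{r}}(1) = 0.
\]
```

The second order condition $-L^{\circ\circ}_{r}(x;u,-u) > 0$ is then checked through the generalized second order derivative, as stated in the example.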

We close this paper with a few words on possible applications of generalized second order optimality conditions.

Huang and Yang \cite{Huang} present some nonlinear penalty methods for constrained multiobjective optimization problems. Theorem \ref{SufSecOrderCond} can be used, for example, in the study and development of this kind of method. It is well known that penalty functions may be nonsmooth. Moreover, even when a smoothing approach is applied, the resulting function may fail to be twice differentiable.

The examination of the Hessian of the penalty function is important in choosing effective algorithms (see Nocedal and Wright \cite{Nocedal} for the mono-objective case). The efficiency of penalty methods depends, among other factors, on the conditioning of the Hessian matrix.

In Bazaraa et al. \cite{Bazaraa}, second order sufficient conditions are assumed in proving that the augmented Lagrangian penalty function is an exact penalty function (for scalar optimization). Theorem \ref{SufSecOrderCond} can thus be employed in the development of such methods for multiobjective problems with $C^{1,1}$ data.

Another application of sufficient second order optimality conditions is in sensitivity analysis. See Luenberger and Ye \cite{Luenberger} for the mono-objective case.

These are topics for future work.

%**********************
%\newsec{Conclusions}%**
%%**********************
%
%In this work we introduce a notion of saddle point for vector optimization problems between Banach spaces, generalizing the notion introduced earlier by Osuna-Gómez et al. \cite{Osuna-Gomez1}, where is studied a multiobjective optimization problem in finite dimensions. The saddle point definition given here is based on a scalar Lagrangian, which is a special feature since we are dealing with vectorial problems. We give a characterization of weakly efficient solutions for the mentioned problem by means of this kind of saddle points. Furthermore, we obtain second order necessary and sufficent optimality conditions for a certain class of multiobjective problems, which were established without resorting to the classical assumption of twice differentiability of the data. Actually, a generalized second order derivative is used. This concept of differentiability can be seen as a natural second order extension to the Clarke \cite{Clarke} first order generalized derivatives.

%********************************************************
\begin{abstract}
{\bf Resumo}. O artigo trata de um problema de otimização vetorial entre espaços de Banach com restrições envolvendo cones. Usando-se uma lagrangiana que toma valores escalares e o conceito de funções subconvexas generalizadas, soluções fracamente eficientes são caracterizadas por condições do tipo ponto de sela. Os resultados, em conjunto com a noção de Hessian generalizada (introduzida em [R. Cominetti, R. Correa, A generalized second-order derivative in nonsmooth optimization, {\em SIAM J. Control Optim.}, \textbf{28} (1990), 789--809]), são aplicados para se obter condições necessárias e suficientes de segunda ordem para o caso particular em que as funcionais envolvidas são definidas em um espaço de Banach geral mas com valores em espaços de dimensão finita (sem exigir que as funções objetivo e de restrições sejam duas vezes diferenciáveis).
\end{abstract}
%********************************************************

\begin{thebibliography}{99}

\bibitem{Aubin e Ekeland} J.P. Aubin, I. Ekeland, ``Applied Nonlinear Analysis'', John Wiley \& Sons, New York, 1984.

\bibitem{Bazaraa} M.S. Bazaraa, H.D. Sherali and C.M. Shetty, ``Nonlinear Programming: Theory and Algorithms'', Wiley-Interscience, New Jersey, 2006.

\bibitem{Bellaassali} S. Bellaassali, A. Jourani, Lagrange multipliers for multiobjective programs with a general preference, {\em Set-Valued Anal}, {\bf 16} (2008), 229--243.

\bibitem{Bigi} G. Bigi, Saddlepoint optimality criteria in vector optimization, in ``Optimization in Economics, Finance and Industry'' (A. Guerraggio et al., eds.), pp. 85--102, Datanova, Milan, 2002.

\bibitem{Chaney} R.W. Chaney, Second order directional derivatives for nonsmooth functions, {\em J. Math. Anal. Appl.}, \textbf{128} (1987), 495--511.

\bibitem{chen} S.L. Chen, N.J. Huang, M.M. Wong, B-Semipreinvex functions and vector optimization problems in Banach spaces, {\em Taiwanese J. Math.}, {\bf 11} (2007), 595--609.

\bibitem{Clarke} F.H. Clarke, ``Optimization and Nonsmooth Analysis'', John Wiley \& Sons, New York, 1983.

\bibitem{Cominetti e Correa} R. Cominetti, R. Correa, A generalized second-order derivative in nonsmooth optimization, {\em SIAM J. Control Optim.}, \textbf{28} (1990), 789--809.

\bibitem{Craven} B.D. Craven, ``Control and Optimization'', Chapman \& Hall, London, 1995.

\bibitem{dacunha} N.O. Da Cunha, E. Polak, Constrained minimization under vector-valued criteria in finite dimensional spaces, {\em J.  Math. Anal. Appl.}, \textbf{19} (1967), 103--124.

\bibitem{Gfrerer} H. Gfrerer, Second-order optimality conditions for scalar and vector optimization problems in Banach spaces, {\em SIAM J. Control Optim.}, {\bf 45} (2006), 972--997.

\bibitem{Hiriart-Urruty} J.B. Hiriart-Urruty, Approximating a second-order directional derivative for nonsmooth convex functions, {\em SIAM J. Control Optim.}, \textbf{22} (1984), 43--56.

\bibitem{Huang} X.X. Huang and X.Q. Yang, Asymptotic analysis of a class of nonlinear penalty methods for constrained multiobjective optimization, {\em Nonlinear Analysis}, \textbf{47} (2001), 5573--5584.

\bibitem{jahn} J. Jahn, ``Mathematical Vector Optimization in Partially Ordered Linear Spaces'', Peter Lang, Frankfurt, 1986.

\bibitem{Luenberger} D.G. Luenberger and Y. Ye, ``Linear and Nonlinear Programming'', Springer, New York, 2008.

\bibitem{Nocedal} J. Nocedal and S.J. Wright, ``Numerical Optimization'', Springer, New York, 2006.

\bibitem{Osuna-Gomez1} R. Osuna-Gómez, A. Rufián-Lizana, P. Ruiz-Canales, Duality in nondifferentiable vector programming, {\em J. Math. Anal. Appl.}, \textbf{259} (2001), 462--475.

\bibitem{Rockafellar} R.T. Rockafellar, ``Convex Analysis'', Princeton University Press, Princeton, New Jersey, 1970.

\bibitem{DosSantos} L.B. Dos Santos, A.J.V. Brandão, R. Osuna-Gómez, M.A. Rojas-Medar, Necessary and sufficient conditions for weak efficiency in non-smooth vectorial optimization problems, {\em Optimization}, {\bf 58} (2009), 981--993.

\bibitem{Taa} A. Taa, Second-order conditions for nonsmooth multiobjective optimization problems with inclusion constraints, {\em J. Glob. Optim.}, {\bf 50} (2011), 271--291.

\bibitem{Xinming Yang} X. Yang, Alternative theorems and optimality conditions with weakened convexity, {\em Opsearch}, \textbf{29} (1992), 125--135.
\end{thebibliography}

\end{document}