% !TEX encoding = IsoLatin
%% Before processing this LaTeX (LaTeX2e) file, make sure
%% that the file TEMA.cls is in the same folder. The file
%% TEMA.cls can be obtained from the address
%% http://tema.sbmac.org.br.

\documentclass{TEMA}

\usepackage[english]{babel}    % for text in English

\usepackage[latin1]{inputenc}   % for Portuguese accents in IsoLatin1
%\usepackage[utf8]{inputenc}     % for Portuguese accents in UTF-8

% Note: this file uses the IsoLatin1 encoding. To use the
% UTF-8 encoding, convert the file with your favorite tool,
% then change the argument of inputenc accordingly.

\usepackage[dvips]{graphics}
\usepackage{subfigure}
\usepackage{graphicx}
\usepackage{epsfig}
\usepackage{hyperref}
\usepackage{framed}
\usepackage{psfrag}
\usepackage{tikz}
\usepackage{algorithm}


\begin{document}

%********************************************************
\title{A Comparison Among Simple Algorithms for Linear Programming}


\criartitulo





\begin{abstract}

{\bf Abstract}. This paper presents a comparison between a family of simple algorithms for linear programming and the optimal pair adjustment algorithm. The optimal pair adjustment algorithm improves the convergence of von Neumann's algorithm, which is very attractive because of its simplicity. However, von Neumann's algorithm is not practical for solving linear programming problems to optimality, since its convergence is slow. The family of simple algorithms results from a generalization of the optimal pair adjustment algorithm that introduces a parameter for the number of chosen columns, instead of using just a pair of them. Such generalization preserves the nice features of the simple algorithms. Significant improvements over the optimal pair adjustment algorithm are demonstrated through numerical experiments on a set of linear programming problems.

{\bf Keywords}. Linear programming, von Neumann's algorithm, simple algorithms.

\end{abstract}


\newpage

\section{Introduction}

The von Neumann algorithm was reported by Dantzig in the early 1990s \cite{D1,D2}, and it was later studied by Epelman and Freund \cite{EP2} and by Beck and Teboulle \cite{BT2004}. Among the advantages of this method are its low computational cost per iteration, which is dominated by a matrix-vector multiplication, its ability to exploit the sparsity of the original problem data, and its usually fast initial advance. Epelman and Freund \cite{EP2} refer to this algorithm as ``elementary'', since each iteration involves only simple computations; it is therefore unsophisticated, especially when compared with modern interior point algorithms.

In \cite{G2}, three algorithms were proposed to overcome some convergence difficulties of von Neumann's method: the optimal pair adjustment algorithm, the weight reduction algorithm, and the projection algorithm. The optimal pair adjustment algorithm (OPAA) provides the best results among them and inherits the best properties of von Neumann's algorithm. Although the OPAA converges faster than von Neumann's algorithm, its convergence is still considered slow, making it impractical for solving linear optimization problems to optimality.

This work presents a comparison between a family of simple algorithms for linear programming and the optimal pair adjustment algorithm. This family originated from a generalization of the idea presented by Gon\c calves, Storer and Gondzio in \cite{G2} to develop the OPAA; hence, the optimal adjustment algorithm for {\it p} coordinates was developed. Indeed, for each value of {\it p}, a different algorithm is defined, in which {\it p} is limited by the order of the problem, thus resulting in a family of algorithms. This family of simple algorithms maintains the ability to exploit the sparsity of the original problem data and a fast initial convergence. Significant improvements over the OPAA are demonstrated through numerical experiments on a set of linear programming problems.

The paper is organized as follows. Section $2$ contains a description of von Neumann's algorithm. Section $3$ presents both the weight reduction algorithm and the OPAA. Section $4$ discusses the family of simple algorithms, theoretical properties of convergence of the optimal adjustment algorithm for $p$ coordinates, and a sufficient condition for it to present better iterations than the iterations of von Neumann's algorithm. Section $5$ describes the computational experiments comparing the family with the OPAA. The conclusions and perspectives for future work are presented in the last section.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Von Neumann's Algorithm}


The von Neumann algorithm for solving linear programming problems was reported by Dantzig in the early 1990s in \cite{D1,D2}. The algorithm actually solves the equivalent problem described below.

Consider the following set of linear constraints and the search for a feasible solution of:

\begin{equation}\label{a1}
        \begin{array}{l}
                Px = 0, \\
                e^{t}x = 1,     x \geq  0, \\
        \end{array}%
\end{equation}

\noindent where $P \in \Re^{m \times n}$ and $||P_j|| = 1$ for $j=1,\ldots,n$ (the columns have norm one), $x \in \Re^{n}$, and $e \in \Re^{n}$ is the vector of all ones.

Geometrically, the columns $P_{j}$ are points on the $m$-dimensional hypersphere with unit radius and center at the origin. The problem above therefore asks for non-negative weights $x_{j}$, summing to one, to be assigned to the columns $P_{j}$ so that the origin is the center of gravity of the points $P_{j}$. Note that any linear programming problem can be reduced to problem (\ref{a1}) (see \cite{G1}).

Figure \ref{fg1} illustrates the algorithm. At the $k$-th iteration, the residual is $b^{k-1} = Px^{k-1}$. The next residual $b^{k}$ is the projection of the origin onto the line segment joining $b^{k-1}$ to $P_{s}$, where $P_{s}$ is the column that forms the largest angle with the residual $b^{k-1}$. The triangle $b^{k-1}0b^{k}$ has hypotenuse $\overline{0b^{k-1}}$ and leg $\overline{0b^{k}}$; thus, $||b^{k}||<||b^{k-1}||$ at every iteration.
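The optimal step length used in Step $3)$ below follows directly from this projection. Writing $u=||b^{k-1}||$, $v=P_{s}^{t}b^{k-1}$, and using $||P_{s}||=1$, we have
$$||\lambda b^{k-1}+(1-\lambda)P_{s}||^{2}=\lambda^{2}u^{2}+2\lambda(1-\lambda)v+(1-\lambda)^{2},$$
and setting the derivative with respect to $\lambda$ to zero gives
$$\lambda=\frac{1-v}{u^{2}-2v+1},$$
which is exactly the value computed by the algorithm.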

\begin{figure}[htb]
\centering
\includegraphics[width=4.0cm]{figalgVon.eps}
\caption{Illustration of von Neumann's algorithm.}
\label{fg1}
\end{figure}

The steps of von Neumann's algorithm are described below:


\begin{algorithm}[ht]
{\bf von Neumann's Algorithm}

{\bf Given:} \hspace{0.1cm}$x^{0}\geq 0, \hspace{0.1cm} \hbox{with}\hspace{0.1cm} e^{t}x^{0}=1.$ Compute \hspace{0.1cm} $b^{0}=Px^{0}.$

{\bf For} $k=1,2,3,...$ {\bf Do:}

$1)$ Compute:

\hspace{1.5cm} $s=\hbox{argmin}_{j=1,\ldots,n}\{P_{j}^{t}b^{k-1}\}$,

\hspace{1.5cm} $v^{k-1}=P^{t}_{s}b^{k-1}$.

$2)$ If \hspace{0.1cm} $v^{k-1}>0,$ then {\bf STOP}. The problem\hspace{0.1cm} (\ref{a1}) \hspace{0.1cm} is infeasible.

$3)$ Compute:

\hspace{1.5cm} $u^{k-1}=||b^{k-1}||,$ \hspace{1.5cm} $\lambda=\frac{1-v^{k-1}}{(u^{k-1})^{2}-2v^{k-1}+1}.$

$4)$ Update:

\hspace{1.5cm} $b^{k}=\lambda b^{k-1}+(1-\lambda)P_{s},$ \hspace{1.5cm} $x^{k}=\lambda x^{k-1}+(1-\lambda)e_{s},$

\hspace{0.5cm} where $e_{s}$ is the standard basis vector with 1 in the $s$-th coordinate.

\noindent{\bf End}\\

\end{algorithm}

In the computational experiments, the stopping criterion $||b^{k}-b^{k-1}||/||b^{k}||< \epsilon$ was used, where $\epsilon$ is a given tolerance, and the starting point was $x^{0}_{j}=\frac{1}{n}$, $j=1, \ldots, n$.

The effort per iteration of von Neumann's algorithm is dominated by the matrix-vector multiplication required to select the column $P_{s}$, which costs $O(mn)$ operations. This number of operations is significantly lower if the matrix $P$ is sparse. For more details, see \cite{G1,G2}.
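To make the iteration concrete, the following is a minimal sketch of one step of von Neumann's algorithm in C, assuming dense column-wise storage of $P$ with unit-norm columns; the function name {\tt von\_neumann\_step} is illustrative, and the implementation described in Section 5 works instead with the sparse two-block format of \cite{G1}.

\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* One iteration of von Neumann's algorithm (dense sketch).
   P is m x n, stored column by column, with unit-norm columns;
   b and x hold the current residual and weights and are updated
   in place.  Returns 1 if infeasibility is detected (v > 0). */
int von_neumann_step(size_t m, size_t n, const double *P,
                     double *b, double *x)
{
    /* 1) column P_s forming the largest angle with b, i.e.,
          minimizing P_j^t b; this O(mn) loop dominates the cost */
    size_t s = 0;
    double v = INFINITY;
    for (size_t j = 0; j < n; j++) {
        double dot = 0.0;
        for (size_t i = 0; i < m; i++)
            dot += P[j * m + i] * b[i];
        if (dot < v) { v = dot; s = j; }
    }

    /* 2) infeasibility test */
    if (v > 0.0) return 1;

    /* 3) optimal step length from the projection formula */
    double u2 = 0.0;                      /* u2 = ||b||^2 */
    for (size_t i = 0; i < m; i++) u2 += b[i] * b[i];
    double lambda = (1.0 - v) / (u2 - 2.0 * v + 1.0);

    /* 4) update: b = lambda*b + (1-lambda)*P_s,
                  x = lambda*x + (1-lambda)*e_s */
    for (size_t i = 0; i < m; i++)
        b[i] = lambda * b[i] + (1.0 - lambda) * P[s * m + i];
    for (size_t j = 0; j < n; j++) x[j] *= lambda;
    x[s] += 1.0 - lambda;
    return 0;
}
\end{verbatim}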

\section{The Weight Reduction and the Optimal Pair Adjustment Algorithms}

In this section, two algorithms developed by Gon\c calves, Storer and Gondzio \cite{G2} are described: the weight-reduction algorithm and the optimal pair adjustment algorithm. Both are based on von Neumann's algorithm and were developed to improve its convergence.

In the weight-reduction algorithm, the residual $b^{k-1}$ is moved closer to the origin by increasing the weight $x_{j}$ of some column $P_{j}$ while decreasing the weight $x_{i}$ of another column $P_{i}$. Figure \ref{frg1} shows the geometric interpretation of the weight-reduction algorithm.

\begin{figure}[htb]
\centerline{\includegraphics[width=5.0cm]{figalgrepeso.eps}}
\caption{Illustration of the weight-reduction algorithm.}
\label{frg1}
\end{figure}


At each iteration, the residual $b^{k-1}$ moves in the direction $P_{s^{+}}-P_{s^{-}}$, where the columns $P_{s^{+}}$ and $P_{s^{-}}$ make the largest and the smallest angle with $b^{k-1}$, respectively. The new residual $b^{k}$ is the projection of the origin onto that line. Only the weights $x_{s^{+}}$ and $x_{s^{-}}$ are changed. There is no guarantee that an iteration of this algorithm improves the residual as much as an iteration of von Neumann's algorithm \cite{G2}.

However, the OPAA, also developed by Gon\c calves, Storer and Gondzio, improves the residual at least as much as von Neumann's algorithm \cite{G2}.

First, the OPAA computes the columns $P_{s^{+}}$ and $P_{s^{-}}$. Then, it computes the values $x^{k}_{s^{+}}$, $x^{k}_{s^{-}}$, and $\lambda$, with $x^{k}_{j}=\lambda x^{k-1}_{j}$ for all $j\neq s^{+}$ and $j\neq s^{-}$, that minimize the distance between $b^{k}$ and the origin, subject to the convexity and non-negativity constraints. The solution of this optimization problem is easy to find by examining the Karush-Kuhn-Tucker (KKT) conditions (see \cite{G2}).

The optimal pair adjustment algorithm is described below.

\begin{algorithm}
{\bf Optimal Pair Adjustment Algorithm}	\\

{\bf Given:} \hspace{0.1cm} $x^{0}\geq 0$, \hspace{0.1cm} with \hspace{0.1cm} $e^{t}x^{0}=1$. Compute \hspace{0.1cm} $b^{0}=Px^{0}$.

{\bf For} $k=1,2,3,...$ {\bf Do:}

$1)$ Compute:

\hspace{1.5cm} $s^{+}=\hbox{argmin}_{j=1,\ldots,{n}}\{P_{j}^{t}b^{k-1}\}$,

\hspace{1.5cm} $s^{-}=\displaystyle{\hbox{argmax}_{j=1,\ldots,{n}}}\{P_{j}^{t}b^{k-1}|\hspace{0.2cm} x_{j}>0 \},$

\hspace{1.5cm} $v^{k-1}=P^{t}_{s^{+}}b^{k-1}.$

$2)$ If \hspace{0.1cm} $v^{k-1}>0,$ then {\bf STOP}; the problem (\ref{a1}) is infeasible.

$3)$ Solve the problem:

\vspace{-0.5cm}

\begin{equation}
\begin{array}{ll}
\label{a10}
  \hbox{minimize}  & ||\overline{b}||^{2}\\
  \hbox{s.t.}      & \lambda_{0}(1-x^{k-1}_{s^{+}}-x^{k-1}_{s^{-}})+\lambda_{1}+\lambda_{2}=1, \\
                   &\lambda_{i}\geq 0,  \hspace{0.1cm} \hbox{for} \hspace{0.1cm} i=0,1,2.
\end{array}
\end{equation}

\hspace{0.5cm} where $\overline{b}=\lambda_{0}(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}})+\lambda_{1}P_{s^{+}}+\lambda_{2}P_{s^{-}}.$

$4)$ Update:

\begin{equation*}
\begin{array}{l}
 b^{k}=\lambda_{0}(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}})+\lambda_{1}P_{s^{+}}+\lambda_{2}P_{s^{-}},\\
 u^{k}=||b^{k}||,\\
 x^{k}_{j}=\left\{%
\begin{array}{ll}
  \lambda_{0}x^{k-1}_{j}, & j \neq s^{+}\hspace{0.1cm} \hbox{and} \hspace{0.1cm} j \neq s^{-},  \\
  \lambda_{1}, & j=s^{+}, \\
  \lambda_{2}, & j=s^{-}.             \\
\end{array}
\right.\\
k=k+1.
\end{array}
\end{equation*}

\noindent{\bf End}

\end{algorithm}

The main difference between the two algorithms is that the OPAA modifies all the weights $x^{k}_{j}$, while the weight-reduction algorithm modifies only the weights of the columns $P_{s^{+}}$ and $P_{s^{-}}$.


\section{Optimal Adjustment Algorithm for $p$ Coordinates}

This section presents the optimal adjustment algorithm for $p$ coordinates developed by Silva \cite{J01}. This algorithm was developed by generalizing the idea behind the subproblem (\ref{a10}) of the OPAA: instead of using two columns to formulate the subproblem, any number of columns can be used, thus assigning relevance to any number of variables. For each value of $p$, a different algorithm is obtained; hence, a family of algorithms was developed.

The $p$ variables can be chosen by different methods, according to the problem. A natural choice is to take the $p/2$ columns that make the largest angles with the vector $b^{k}$ and the $p/2$ columns that make the smallest angles with it. If $p$ is odd, an extra column is added to one of the two sets, for instance, to the set of columns forming the largest angles with $b^{k}$.
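As an illustration of this selection rule, the sketch below picks the two sets of indices in C, assuming the inner products $P_{j}^{t}b^{k-1}$ are already stored in {\tt dots} and that at least $s_{2}$ columns have positive weight; the function name {\tt select\_p\_columns} and the simple repeated-scan strategy, which costs $O(pn)$ and is adequate for moderate $p$, are illustrative choices, not taken from the paper's implementation.

\begin{verbatim}
#include <stddef.h>

/* Returns 1 if j is among the first k entries of sel. */
static int chosen(size_t j, const size_t *sel, size_t k)
{
    for (size_t i = 0; i < k; i++)
        if (sel[i] == j) return 1;
    return 0;
}

/* dots[j] = P_j^t b.  Picks the s1 indices with the smallest
   inner products (largest angles) into eta_plus, and the s2
   indices with the largest inner products (smallest angles),
   restricted to columns with x[j] > 0, into eta_minus. */
void select_p_columns(size_t n, const double *dots, const double *x,
                      size_t s1, size_t *eta_plus,
                      size_t s2, size_t *eta_minus)
{
    for (size_t k = 0; k < s1; k++) {
        size_t best = n;
        for (size_t j = 0; j < n; j++) {
            if (chosen(j, eta_plus, k)) continue;
            if (best == n || dots[j] < dots[best]) best = j;
        }
        eta_plus[k] = best;
    }
    for (size_t k = 0; k < s2; k++) {
        size_t best = n;
        for (size_t j = 0; j < n; j++) {
            if (x[j] <= 0.0 || chosen(j, eta_minus, k)) continue;
            if (best == n || dots[j] > dots[best]) best = j;
        }
        eta_minus[k] = best;
    }
}
\end{verbatim}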

The optimal adjustment algorithm for $p$ coordinates computes a better direction than the OPAA, while maintaining its simplicity: at each iteration, only one matrix-vector multiplication is performed and one small linear system with a positive definite matrix is solved.

The steps of the optimal adjustment algorithm for $p$ coordinates are similar to those of the OPAA. First, $s_{1}$ columns forming the largest angles and $s_{2}$ columns forming the smallest angles with the residual $b^{k-1}$ are identified, where $s_{1}+s_{2}=p$ and $p$ is the number of columns to be prioritized. Next, the subproblem is solved, and then the residual and the current point are updated.

\begin{algorithm}[ht]
{\bf Optimal Adjustment Algorithm for $p$ Coordinates}
\\

{\bf Given:} $\hspace{0.1cm}x^{0}\geq 0, \hspace{0.1cm} \hbox{with}\hspace{0.1cm} e^{t}x^{0}=1$. Compute \hspace{0.1cm} $b^{0}=Px^{0}.$

{\bf For} $k=1,2,3,...$ {\bf Do:}

$1)$ Compute:

\begin{equation*}
\begin{array}{l}
\{P_{\eta^{+}_{1}},\ldots,P_{\eta^{+}_{s_{1}}}\}\hspace{0.1cm} \hbox{forming the largest angle with the vector} \hspace{0.1cm} b^{k-1}. \\
\{P_{\eta^{-}_{1}},\ldots,P_{\eta^{-}_{s_2}}\} \hspace{0.2cm} \hbox{forming the smallest angle with the vector} \hspace{0.1cm} b^{k-1}\hspace{0.1cm} \hbox{ such that}\\
x^{k-1}_{i}>0,i=\eta^{-}_{1},\ldots,\eta^{-}_{s_{2}},\hspace{0.05cm} \hbox{where}\hspace{0.05cm} s_{1}+s_{2}=p. \\
v^{k-1}=\hbox{minimum}_{i=1,\ldots,s_{1}}\{P^{t}_{\eta^{+}_{i}}b^{k-1}\}. \\
\end{array}
\end{equation*}

$2)$ If \hspace{0.1cm} $v^{k-1}>0$, then {\bf STOP}; the problem\hspace{0.1cm} (\ref{a1}) \hspace{0.1cm} is infeasible.
%\end{algorithm}

%\begin{algorithm}[ht]
$3)$ Solve the problem:

\vspace{-0.3cm}
\begin{equation}\label{a13}
\begin{array}{ll}
\hbox{minimize} & \frac{1}{2}||A\lambda||^{2}\\
\hbox{s.t.}     & c^{t}\lambda=1, \\
                & \lambda \geq 0  ,    \\
\end{array}
\end{equation}

where  $A=\left[\overline{w} \hspace{0.1cm}  P_{\eta^{+}_{1}} \ldots P_{\eta^{+}_{s_1}} P_{\eta^{-}_{1}} \ldots P_{\eta^{-}_{s_2}} \right],
\overline{w}=b^{k-1}-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\displaystyle{\sum_{j=1}^{s_2}}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}},
$ $\lambda=\left(\lambda_{0},\lambda_{\eta^{+}_{1}}, \ldots, \lambda_{\eta^{+}_{s_{1}}},\lambda_{\eta^{-}_{1}}, \ldots,\lambda_{\eta^{-}_{s_{2}}}\right),$ $
c=\left(c_{1},1,\ldots,1 \right)$ and $c_{1}=1-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}-\sum_{j=1}^{s_2}x^{k-1}_{\eta^{-}_{j}}.$

$4)$ Update:
\begin{equation*}
\begin{array}{l}
b^{k}=\lambda_{0}\left(b^{k-1}-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\displaystyle{\sum_{j=1}^{s_2}}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}}\right)+\displaystyle{\sum_{i=1}^{s_1}}\lambda_{\eta^{+}_{i}}P_{\eta^{+}_{i}}+\displaystyle{\sum_{j=1}^{s_2}}\lambda_{\eta^{-}_{j}}P_{\eta^{-}_{j}},\\
u^{k}=||b^{k}||,\\
x^{k}_{j}=\left\{%
\begin{array}{ll}
  \lambda_{0}x^{k-1}_{j}, & j \notin \{{\eta^{+}_{1}},\ldots,{\eta^{+}_{s_{1}}},{\eta^{-}_{1}},\ldots,{\eta^{-}_{s_{2}}}\},  \\
  \lambda_{\eta^{+}_{i}}, & j=\eta^{+}_{i},i=1,\ldots,s_1, \\
  \lambda_{\eta^{-}_{j}}, & j=\eta^{-}_{j},j=1,\ldots,s_2. \\
\end{array}%
\right.\\
k=k+1.
\end{array}%
\end{equation*}

\noindent{\bf End}\\
\end{algorithm}

%\newpage
Subproblem (\ref{a13}) is built in such a way that the residual decreases as much as possible while the constraints $x^{k}\geq 0$ and $e^{t}x^{k}=1$ are preserved.


\subsection{Relation between von Neumann's algorithm and the
algorithm with \texorpdfstring{$p=1$}{Lg} and geometric interpretation}

The optimal adjustment algorithm for {\it p} coordinates with $p=1$ and von Neumann's algorithm generate the same points $x^{k}$ and the same residuals $b^{k}$, as shown in Figure \ref{fg1}.

The optimal $\lambda$ for von Neumann's algorithm is calculated by projecting the origin onto the line segment joining $b^{k-1}$ to $P_{s}$.

In the algorithm with $p = 1$, the following subproblem is solved:

\begin{equation}\label{vis2}
\begin{array}{ll}
  \hbox{minimize}  & ||\overline{b}||^{2}\\
  \hbox{s.t.}      & \lambda_{0}(1-x^{k-1}_{s^{}})+\lambda_{1}=1, \\
                   &\lambda_{i}\geq 0,  \hspace{0.1cm} \hbox{for} \hspace{0.1cm} i=0,1.\\
\end{array}%
\end{equation}

\noindent where $\overline{b}=\lambda_{0}(b^{k-1}-x^{k-1}_{s}P_{s})+\lambda_{1}P_{s}.$

The subproblem (\ref{vis2}) can be rewritten as follows:\\
$$ \lambda_{0}(1-x^{k-1}_{s})+\lambda_{1}=1 \Leftrightarrow \lambda_{1}=1-\lambda_{0}(1-x^{k-1}_{s})\geq 0,$$
$$\overline{b}=\lambda_{0}(b^{k-1}-x^{k-1}_{s}P_{s})+\lambda_{1}P_{s}=\lambda_{0}(b^{k-1}-x^{k-1}_{s}P_{s})+(1-\lambda_{0}(1-x^{k-1}_{s}))P_{s}=
\lambda_{0}b^{k-1}+(1-\lambda_{0})P_{s}.$$
Thus, writing $\lambda=\lambda_{0}$, problem (\ref{vis2}) becomes:

\begin{equation}\label{vis3}
\begin{array}{ll}
  \hbox{minimize}  & ||\overline{b}||^{2}\\
  \hbox{s.t.}      &  \lambda \in \left[0,\frac{1}{1-x^{k-1}_s}\right]\\
                   &\\
\end{array}%
\end{equation}
where $\overline{b}=\lambda b^{k-1}+(1-\lambda)P_{s}.$

If the term $\frac{1}{1-x^{k-1}_s}$ is larger than $1$, the feasible set is larger than that of von Neumann's algorithm. However, the geometric interpretation of the algorithm with $p=1$, presented in Figure \ref{fgp}, shows that the optimal $\lambda$ is the same for both algorithms.

\begin{figure}[ht]
\centerline{\includegraphics[scale=0.40]{figvisaop1.eps}}
\caption{Illustration of the algorithm with $p=1$.}
\label{fgp}
\end{figure}

In Step $4$ of von Neumann's algorithm, $x^k$ is updated by

$x^{k}_{j}=\left\{%
\begin{array}{ll}
  \lambda x^{k-1}_{j}, & j \neq s  \\
  \lambda (x^{k-1}_{s}-1)+1, & j=s. \\
\end{array}
\right.$

In Step $4$ of the algorithm with $p=1$, $x^k$ is updated by

$x^{k}_{j}=\left\{%
\begin{array}{ll}
  \lambda x^{k-1}_{j}, & j \neq s  \\
  1-\lambda (1-x^{k-1}_{s}), & j=s. \\
\end{array}
\right.$

Therefore, the algorithm with $p=1$ and von Neumann's algorithm generate the same points $x^{k}$ and the same residual $b^{k}$.

For $p=2$, the subproblem (\ref{a13}) is reduced to the following form:

\begin{equation}\label{p2}
\begin{array}{ll}
  \hbox{minimize}  & ||\overline{b}||^{2}\\
  \hbox{s.t.}      & \lambda_{0}(1-x^{k-1}_{s^{+}}-x^{k-1}_{s^{-}})+\lambda_{1}+\lambda_{2}=1, \\
                   &\lambda_{i}\geq 0,  \hspace{0.1cm} \hbox{for} \hspace{0.1cm} i=0,1,2.\\
\end{array}%
\end{equation}
where $\overline{b}=\lambda_{0}(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}})+\lambda_{1}P_{s^{+}}+\lambda_{2}P_{s^{-}}.$

The term $\overline{b}$ can be rewritten as:

\begin{equation*}
\begin{array}{rl}
     \overline{b}= & \lambda_{0}(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}})+\lambda_{1}P_{s^{+}}+\lambda_{2}P_{s^{-}} \\
     = & \lambda_{0}b^{k-1}+(\lambda_{1}-\lambda_{0}x^{k-1}_{s^{+}})P_{s^{+}}+(\lambda_{2}-\lambda_{0}x^{k-1}_{s^{-}})P_{s^{-}} \\
\end{array}
\end{equation*}


\noindent Thus, $\overline{b}(\lambda_{0},\lambda_{1},\lambda_{2})$ is a linear transformation. When the vectors $\{(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}}),$ $P_{s^{+}}$ and $P_{s^{-}}\}$ are linearly independent, this linear transformation is injective. It maps the triangle given by $\lambda_{0}(1-x^{k-1}_{s^{+}}-x^{k-1}_{s^{-}})+\lambda_{1}+\lambda_{2}=1$, together with its interior, onto the triangle whose vertices are $P_{s^{+}},$ $P_{s^{-}}$ and $P_v=\frac{1}{(1-x^{k-1}_{s^{+}}-x^{k-1}_{s^{-}})}(b^{k-1}-x^{k-1}_{s^{+}}P_{s^{+}}-x^{k-1}_{s^{-}}P_{s^{-}})$, together with its interior.

Therefore, the optimal residual $b^k$ is the projection of the origin on this triangle. The geometric interpretation of the algorithm with $p=2$ is given in Figure \ref{fgp1}.

\begin{figure}[ht]
\centerline{\includegraphics[scale=0.25]{figvisaop2.eps}}
\caption{Illustration of the algorithm with $p=2$.}
\label{fgp1}
\end{figure}

For $p > 2$ coordinates, the term $\overline{b}$ in subproblem (\ref{a13}) can be rewritten as

\begin{equation*}
\begin{array}{rl}
\overline{b}= & \lambda_{0}\left(b^{k-1}-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\sum_{j=1}^{s_2}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}}\right)+\displaystyle{\sum_{i=1}^{s_1}}\lambda_{\eta^{+}_{i}}P_{\eta^{+}_{i}}+\sum_{j=1}^{s_2}\lambda_{\eta^{-}_{j}}P_{\eta^{-}_{j}} \\
 =&\lambda_{0}b^{k-1}+\displaystyle{\sum_{i=1}^{s_1}}(\lambda_{\eta^{+}_{i}}-\lambda_{0}x^{k-1}_{\eta^{+}_{i}})P_{\eta^{+}_{i}}+\sum_{j=1}^{s_2}(\lambda_{\eta^{-}_{j}}-\lambda_{0}x^{k-1}_{\eta^{-}_{j}})P_{\eta^{-}_{j}} \\
\end{array}
\end{equation*}

with $\lambda_{0}+\displaystyle{\sum_{i=1}^{s_1}}(\lambda_{\eta^{+}_{i}}-\lambda_{0}x^{k-1}_{\eta^{+}_{i}}) +\sum_{j=1}^{s_2}(\lambda_{\eta^{-}_{j}}-\lambda_{0}x^{k-1}_{\eta^{-}_{j}})=1$; hence, $\overline{b}$ is again an affine combination. The optimal residual $b^k$ is therefore the projection of the origin onto the region of the affine space with vertices at the $p$ columns and at the vector $P_v,$ where $$P_v=\frac{1}{1-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}-\sum_{j=1}^{s_2}x^{k-1}_{\eta^{-}_{j}}}\left(b^{k-1}-\displaystyle{\sum_{i=1}^{s_1}}x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\sum_{j=1}^{s_2}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}}\right).$$


\subsection{Subproblem Solution Using Interior Point Methods}

In \cite{G2}, Gon\c calves solved the subproblem (\ref{a10}) by checking all feasible solutions that satisfy the KKT conditions of that subproblem; there are $2^{3}-1$ possible cases, one for each nonempty subset of the variables $\lambda_{0},\lambda_{1},\lambda_{2}$ allowed to be positive (see \cite{G2}). For the optimal adjustment algorithm for $p$ coordinates, the subproblem (\ref{a13}) has $2^{p+1}-1$ possible cases \cite{Ca2}. This strategy is therefore inefficient even for small values of $p$, since the number of cases increases exponentially. Instead, the subproblem (\ref{a13}) can be solved by interior point methods.

Consider the subproblem (\ref{a13}). Its KKT conditions are given by:

\begin{equation} \label{int}
\begin{array}{r}
  A^{t}A\lambda+ c\tau-\mu=0, \\
  \mu^{t}\lambda=0, \\
  c^{t}\lambda-1=0, \\
  -\lambda \leq 0,
\end{array}
\end{equation}

\noindent where $\tau$ is a free variable and $\mu \geq 0$; $\tau$ and $\mu$ are the Lagrange multipliers of the equality and inequality constraints, respectively, and $A^{t}A$ is a $(p+1)\times (p+1)$ matrix.

The path-following interior point method is used to solve the problem (\ref{int}). At each iteration of the interior point method, the linear system of Equation (\ref{IPLS}) is solved to compute the search directions $(d\lambda, d\tau, d\mu)$:

\begin{equation}\label{IPLS}
\left[%
\begin{array}{ccc}
  A^{t}A & c& -Id \\
  U & 0 & \Lambda \\
  c^{t} & 0 & 0 \\
\end{array}%
\right]\left[%
\begin{array}{c}
  d\lambda \\
  d\tau \\
  d\mu \\
\end{array}%
\right]=\left[%
\begin{array}{c}
  r_{1} \\
  r_{2} \\
  r_{3} \\
\end{array}%
\right]
\end{equation}

\noindent where $U=diag(\mu),$ $\Lambda=diag(\lambda),$ $Id$ is the identity matrix, $r_{1}=\mu-c\tau-A^{t}A\lambda,$ $r_{2}=-U\Lambda e,$ and $r_{3}=1-c^{t}\lambda.$

The directions $d\lambda,$ $d\tau$ and $d\mu$ are given by:

$$
\begin{array}{l}
 d\mu=\Lambda^{-1}r_{2}-\Lambda^{-1}U d\lambda,\\
 d\lambda=(A^{t}A+\Lambda^{-1}U)^{-1}r_{4}-(A^{t}A+\Lambda^{-1}U)^{-1}c d\tau,\\
  c^{t}(A^{t}A+\Lambda^{-1}U)^{-1}c d\tau=c^{t}(A^{t}A+\Lambda^{-1}U)^{-1}r_{4}-r_{3},\\
\end{array}
$$

\noindent where $r_{4}=r_{1}+\Lambda^{-1}r_{2}.$

Let $l_1=(A^{t}A+\Lambda^{-1}U)^{-1}c$ and $l_2=(A^{t}A+\Lambda^{-1}U)^{-1}r_{4}$; then, to compute the directions, it is necessary to solve the two linear systems $(A^{t}A+\Lambda^{-1}U)l_1=c$ and $(A^{t}A+\Lambda^{-1}U)l_2=r_{4}$.

The matrix $A^{t}A+\Lambda^{-1}U$ is symmetric positive definite of order $p+1$. Both systems can be solved with the same Cholesky factorization.
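The following is a minimal dense sketch of this step in C, assuming row-major storage of the $(p+1)\times(p+1)$ matrix $M=A^{t}A+\Lambda^{-1}U$; the routine names are illustrative. The factorization $M=R^{t}R$ is computed once and reused for both right-hand sides $c$ and $r_{4}$.

\begin{verbatim}
#include <stddef.h>
#include <math.h>

/* In-place upper Cholesky factorization M = R^t R of a symmetric
   positive definite q x q matrix (q = p + 1, row-major).
   Returns -1 if M is not positive definite. */
static int cholesky(size_t q, double *M)
{
    for (size_t k = 0; k < q; k++) {
        for (size_t i = 0; i < k; i++)
            M[k*q + k] -= M[i*q + k] * M[i*q + k];
        if (M[k*q + k] <= 0.0) return -1;
        M[k*q + k] = sqrt(M[k*q + k]);
        for (size_t j = k + 1; j < q; j++) {
            for (size_t i = 0; i < k; i++)
                M[k*q + j] -= M[i*q + k] * M[i*q + j];
            M[k*q + j] /= M[k*q + k];
        }
    }
    return 0;
}

/* Solve M y = rhs, reusing the factor R stored in M:
   forward substitution with R^t, back substitution with R. */
static void chol_solve(size_t q, const double *M,
                       const double *rhs, double *y)
{
    for (size_t i = 0; i < q; i++) {
        double z = rhs[i];
        for (size_t k = 0; k < i; k++)
            z -= M[k*q + i] * y[k];
        y[i] = z / M[i*q + i];
    }
    for (size_t i = q; i-- > 0; ) {
        double z = y[i];
        for (size_t k = i + 1; k < q; k++)
            z -= M[i*q + k] * y[k];
        y[i] = z / M[i*q + i];
    }
}
\end{verbatim}

With $l_{1}$ and $l_{2}$ obtained from two calls to {\tt chol\_solve}, the directions follow directly: $d\tau=(c^{t}l_{2}-r_{3})/(c^{t}l_{1})$, $d\lambda=l_{2}-l_{1}\,d\tau$, and $d\mu=\Lambda^{-1}(r_{2}-U\,d\lambda)$.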


\subsection{Theoretical Properties of the Family of Algorithms}

Theorem 3.1 in \cite{G2} ensures that the OPAA converges, in the worst case, at the same rate as von Neumann's algorithm. This result can be extended to the optimal adjustment algorithm for $p$ coordinates: increasing the value of $p$ leads to an algorithm whose iterations perform at least as well. This is shown in Theorem \ref{tttu1}; only the second part will be proved.

\begin{teoTEMA} \label{tttu1}
The residual $||b^{k}||$ after an iteration of the optimal adjustment algorithm for $p$ coordinates is, in the worst case, equal to the residual after an iteration of von Neumann's algorithm. Furthermore, suppose that $||b^{k}_{p_{1}}||$ is the residual after an iteration of the optimal adjustment algorithm for $p_{1}$ coordinates, $||b^{k}_{p_{2}}||$ is the residual after an iteration of the optimal adjustment algorithm for $p_{2}$ coordinates, and $p_{1} \leq p_{2}\leq n$; then $ ||b^{k}_{p_{2}}||  \leq ||b^{k}_{p_{1}}|| $, where $n$ is the number of columns of $P$.
\end{teoTEMA}

\begin{proof}
Let $k\geq 1$ and $b^{k-1}$ be the residual at the beginning of the iteration $k$. Further, $\{P_{\eta^{+}_{1}},\ldots,P_{\eta^{+}_{s_{1}}}\}$ and
$\{P_{\eta^{-}_{1}},\ldots,P_{\eta^{-}_{s_{2}}}\}$ are sets of vectors forming the largest and the smallest angles with the vector $b^{k-1}$, respectively, for the algorithm prioritizing the $p_{2}$ coordinates, where $s_{1}+s_{2}=p_{2}$; and $\{P_{\eta^{+}_{1}},\ldots,P_{\eta^{+}_{s_{3}}}\}$ and
$\{P_{\eta^{-}_{1}},\ldots,P_{\eta^{-}_{s_{4}}}\}$ are sets of vectors forming the largest and the smallest angles with the vector $b^{k-1}$, respectively, for the algorithm prioritizing the $p_{1}$ coordinates, where $s_{3}+s_{4}=p_{1}$.

After the $k$-th iteration in the optimal adjustment for the $p_{2}$ coordinates, the residual $b^{k}_{p_{2}}$ will be
\vspace{-0.2cm}

$$	
\begin{array}{l}
b^{k}_{p_{2}}=\overline{\lambda}_{1}
\left( b^{k-1}-\displaystyle{\sum_{i=1}^{s_{1}}} x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\sum_{j=1}^{s_{2}}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}}\right)
+\displaystyle{\sum_{i=1}^{s_{1}}}\overline{\lambda}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}+\sum_{j=1}^{s_{2}}\overline{\lambda}_{\eta^{-}_{j}}P_{\eta^{-}_{j}},\\
\end{array}
$$

\noindent where $(\overline{\lambda}_{1},\overline{\lambda}_{\eta^{+}_{1}},\ldots,\overline{\lambda}_{\eta^{+}_{s_{1}}},\overline{\lambda}_{\eta^{-}_{1}},\ldots,\overline{\lambda}_{\eta^{-}_{s_{2}}})$ is the optimal solution of the subproblem (\ref{a13}) prioritizing $p_{2}$ coordinates.

The optimal solution of the subproblem (\ref{a13}) prioritizing $p_{1}$ coordinates,

$$\begin{array}{l}
(\widetilde{\lambda}_{1},\widetilde{\lambda}_{\eta^{+}_{1}},\ldots,\widetilde{\lambda}_{\eta^{+}_{s_{3}}},\widetilde{\lambda}_{\eta^{-}_{1}},\ldots,\widetilde{\lambda}_{\eta^{-}_{s_{4}}}),
\end{array}$$

\noindent completed with zeros in the remaining $p_{2}-p_{1}$ coordinates, is also a feasible solution of the subproblem (\ref{a13}) when $p_{2}$ coordinates are prioritized.

Therefore,
\vspace{-0.2cm}

$$
\begin{array}{l}
\hspace{-0.6cm} \left\| \widetilde{\lambda}_{1}\left(b^{k-1}-\displaystyle{\sum_{i=1}^{s_{3}}}x^{k-1}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}-\sum_{j=1}^{s_{4}}x^{k-1}_{\eta^{-}_{j}}P_{\eta^{-}_{j}}\right)+\displaystyle{\sum_{i=1}^{s_{3}}}\widetilde{\lambda}_{\eta^{+}_{i}}P_{\eta^{+}_{i}}+\sum_{j=1}^{s_{4}}\widetilde{\lambda}_{\eta^{-}_{j}}P_{\eta^{-}_{j}} \right\| = \\
\hspace{-0.6cm} = \left\|b^{k}_{p_{1}}\right\|\geq \left\|b^{k}_{p_{2}}\right\|,
\end{array}
$$

\noindent where \hspace{0.1cm} $b^{k}_{p_{1}}$
is the residual after an iteration of the optimal adjustment algorithm for the $p_{1}$ coordinates. Consequently, the reduction of the residual after an iteration of the optimal adjustment algorithm for the $p_{2}$ coordinates is, in the worst case, equal to the reduction of the residual after an iteration of the optimal adjustment algorithm for the $p_{1}$ coordinates.

\end{proof}

This theorem does not ensure that one iteration of the family of algorithms is strictly better than one iteration of von Neumann's algorithm. In the next subsection, a sufficient condition for that to happen is given.


\subsection{Sufficient Condition for \texorpdfstring{$||b^{k}||<||b^{k}_{v}||$}{Lg}}
Let $b^{k}$ be the residual of the algorithm with $p=2$ at iteration $k$, and let $P_{s^+}$ and $P_{s^{-}}$ be the columns forming the largest and smallest angles with the vector $b^{k-1}.$ If the projection of the origin lies in the interior of the triangle $b^{k-1}P_{s^{+}}P_{s^{-}}$ and coincides with the projection of the origin onto the plane determined by $b^{k-1}, P_{s^{+}}$ and $P_{s^{-}},$ then $||b^{k}||<||b^{k}_{v}||,$ where $b^{k}_{v}$ is the residual of von Neumann's algorithm at iteration $k$. Indeed, this can be seen in Figure \ref{fgSC}, noting that in the triangle $0b^{k}b^{k}_{v}$, the segment $\overline{0b^{k}_{v}}$ is the hypotenuse and $\overline{0b^{k}}$ is a leg.

\begin{figure}[ht]
\centerline{\includegraphics[scale=0.25]{figcondicaosuficiente.eps}}
\caption{Illustration of the Sufficient Condition.}
\label{fgSC}
\end{figure}

Thus, we conclude that, under these conditions, the optimal adjustment algorithm for $p$ coordinates performs better than von Neumann's algorithm.


\section{Computational Experiments}

The performance of the family of algorithms for moderate values of {\it p} is compared with that of the OPAA, which corresponds to $p = 2$. The choice of moderate values of {\it p} stems from the fact that, for larger values, the cost of solving the subproblem at each iteration becomes noticeable.

For the experiments, a collection of $151$ linear programming problems was used: $95$ Netlib problems \cite{netlib01}, $16$ Kennington problems \cite{King}, and $40$ other problems supplied by Gon\c calves \cite{G2}. The experiments were performed on an Intel Core 2 Quad Q9550 $2.83$ GHz machine with 4 GB of RAM, running Linux and using the gcc compiler.

\subsection{Implementation Details}

The family of algorithms was implemented in C using the problem format given in \cite{G1}, Subsection 2.5.1. The matrix $P$ is not built explicitly; it is divided into two blocks, and only one of them is considered in Step 1 of each iteration to find the $s_1$ and $s_2$ columns forming the largest and smallest angles with the residual $b^{k}$, respectively.

To solve the subproblem, the classical path-following interior point method was also implemented in C. The perturbation of the complementarity is $\frac{\mu^t\lambda}{(p+1)^2}.$

The initial point has all coordinates equal to one. The precision of the solution was $10^{-12}$: the linear constraint $e^{t}x=1$ of problem (\ref{a1}) makes each component $x_{j}$ small, and thus, if the solution is not computed with good precision, the method may not work properly.
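One further detail of any path-following implementation is keeping $\lambda$ and $\mu$ strictly positive along the iterations. The ratio test below is a minimal sketch of this safeguard in C; the damping factor $0.9995$ is a customary choice and an assumption here, not a value taken from the paper.

\begin{verbatim}
#include <stddef.h>

/* Largest alpha in (0, 1] keeping lambda + alpha*dlambda > 0
   and mu + alpha*dmu > 0, damped by the usual factor 0.9995.
   q = p + 1 is the number of components. */
double step_length(size_t q,
                   const double *lambda, const double *dlambda,
                   const double *mu, const double *dmu)
{
    double alpha = 1.0;
    for (size_t i = 0; i < q; i++) {
        if (dlambda[i] < 0.0) {
            double a = -lambda[i] / dlambda[i];
            if (a < alpha) alpha = a;
        }
        if (dmu[i] < 0.0) {
            double a = -mu[i] / dmu[i];
            if (a < alpha) alpha = a;
        }
    }
    return 0.9995 * alpha;
}
\end{verbatim}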

\subsection{Experiment Design}

The experiments were performed following the steps presented by Gon\c calves, Storer and Gondzio in \cite{G2}:


\begin{enumerate}
\item Initially, von Neumann's algorithm is run on all problems;
\item Next, when the relative difference between $||b^{k-1}||$ and $||b^{k}||$ becomes less than $0.5\%$, the time $t1$ (CPU seconds) and the number of iterations up to $t1$ are recorded;
\item Additionally, the times $t2$, $t3$, $t4$, and $t5$ (CPU seconds), which correspond to $3$, $5$, $10$, and $20$ times the number of iterations in $t1$, respectively, are also recorded;
\item Next, the optimal adjustment algorithm for $p$ coordinates, with $p = 2$, $p = 4$, $p = 10$, and $p = 20$, is run on the test problems;
\item Finally, for the times $ti$, $i = 1, \ldots, 5$, the residual $||b^k||$ is recorded.
\end{enumerate}


Table \ref{tb05} shows the test problems and the results for time $t5$, which was used in the performance profiles.

\begin{table}[h!]
\caption{Test problems and time $t5$}
\label{tb05}
\centering
{\tiny
\begin{tabular}{@{}|lrrr|lrrr|}
\hline
%\toprule
{\bf Problem} & {\bf Rows}  & {\bf Columns} & {\bf Time $t5$ (s)}	& {\bf Problem} & {\bf Rows}	& {\bf Columns}	& {\bf Time $t5$ (s)}		\\
\hline
%\midrule
25fv47 	&	769	&	1821	&	0.060000	&	ship04l	&	292	&	1905	&	0.030000	\\
80bau3b	&	1965	&	10701	&	0.240000	&	ship04s	&	216	&	1281	&	0.090000	\\
adlittle&	53	&	134	&	0.000001	&	ship08l	&	470	&	3121	&	0.120000	\\
afiro	&	25	&	48	&	0.000001	&	ship08s	&	276	&	1604	&	0.100000	\\
agg	&	319	&	404	&	0.000001	&	ship12l	&	610	&	4171	&	0.040000	\\
agg2	&	455	&	689	&	0.010000	&	ship12s	&	340	&	1943	&	0.150000	\\
agg3	&	455	&	689	&	0.020000	&	sierra	&	1129	&	2618	&	0.080000	\\
bandm	&	211	&	366	&	0.000001	&	stair	&	356	&	531	&	0.060000	\\
beaconfd&	73	&	148	&	0.020000	&	standata&	292	&	582	&	0.050000	\\
blend	&	66	&	101	&	0.020000	&	standgub&	292	&	582	&	0.050000	\\
bnl1	&	558	&	1439	&	0.020000	&	standmps&	388	&	1146	&	0.040000	\\
bnl2	&	1848	&	3800	&	0.080000	&	stocfor1&	94	&	142	&	0.010000	\\
boeing1	&	294	&	660	&	0.040000	&	stocfor2&	1968	&	2856	&	0.030000	\\
boeing2	&	125	&	264	&	0.000001	&	stocfor3&	15336	&	22202	&	0.190000	\\
bore3d	&	64	&	90	&	0.010000	&	truss	&	1000	&	8806	&	0.050000	\\
brandy	&	116	&	216	&	0.020000	&	tuff	&	246	&	553	&	0.120000	\\
capri	&	235	&	421	&	0.010000	&	vtp\_base&	46	&	82	&	0.000001	\\
cycle	&	1400	&	2749	&	0.020000	&	wood1p	&	171	&	1718	&	0.180000	\\
czprob	&	661	&	2705	&	0.040000	&	woodw	&	708	&	5364	&	0.090000	\\
d2q06c	&	2012	&	5561	&	0.130000	&	cre-a	&	2994	&	6692	&	0.040000	\\
d6cube	&	403	&	5443	&	0.480000	&	cre-b	&	5336	&	36382	&	0.230000	\\
degen2	&	444	&	757	&	0.050000	&	cre-c	&	2375	&	5412	&	0.030000	\\
degen3	&	1503	&	2604	&	0.260000	&	cre-d	&	4102	&	28601	&	0.190000	\\
dfl001	&	5907	&	12065	&	1.440000	&	ken-07	&	1427	&	2603	&	0.030000	\\
e226	&	161	&	392	&	0.020000	&	ken-11	&	10061	&	16709	&	0.540000	\\
etamacro&	331	&	666	&	0.020000	&	ken-13	&	22519	&	36546	&	2.060000	\\
fffff800&	313	&	817	&	0.040000	&	ken-18	&	78823	&	128395	&	21.310000	\\
finnis	&	359	&	775	&	0.010000	&	osa-07	&	1047	&	24911	&	0.160000	\\
fit1d	&	24	&	1047	&	0.010000	&	osa-14	&	2266	&	54535	&	0.380000	\\
fit1p	&	678	&	1706	&	0.010000	&	osa-30	&	4279	&	103978	&	0.000001	\\
fit2d	&	25	&	10387	&	0.280000	&	osa-60	&	10209	&	242411	&	1.840000	\\
fit2p	&	3170	&	13695	&	0.240000	&	pds-02	&	2603	&	7333	&	0.070000	\\
forplan	&	104	&	411	&	0.090000	&	pds-06	&	9119	&	28435	&	0.490000	\\
ganges	&	840	&	1197	&	0.030000	&	pds-10	&	15587	&	48719	&	1.240000	\\
gfrd-pnc&	590	&	1134	&	0.020000	&	pds-20	&	32287	&	106080	&	5.440000	\\
greenbea&	1872	&	4081	&	0.070000	&	BL	&	5468	&	12038	&	0.830000	\\
greenbeb&	1865	&	4065	&	0.090000	&	BL2	&	5480	&	12063	&	0.840000	\\
grow15	&	300	&	645	&	0.010000	&	CO5	&	4471	&	10318	&	0.240000	\\
grow22	&	440	&	946	&	0.020000	&	CO9	&	8510	&	19276	&	0.470000	\\
grow7	&	140	&	301	&	0.000001	&	CQ9	&	7073	&	17806	&	0.300000	\\
israel	&	166	&	307	&	0.010000	&	GE	&	8361	&	14096	&	0.200000	\\
kb2	&	43	&	68	&	0.000001	&	NL	&	6478	&	14393	&	0.560000	\\
lotfi	&	117	&	329	&	0.000001	&	a1	&	42	&	73	&	0.000001	\\
maros	&	626	&	1365	&	0.030000	&	fort45	&	1037	&	1467	&	0.060000	\\
maros-r7&	2152	&	6578	&	0.090000	&	fort46	&	1037	&	1467	&	0.060000	\\
modszk1	&	658	&	1405	&	0.010000	&	fort47	&	1037	&	1467	&	0.050000	\\
nesm	&	646	&	2850	&	0.080000	&	fort48	&	1037	&	1467	&	0.060000	\\
perold	&	580	&	1412	&	0.020000	&	scagr25	&	344	&	543	&	0.020000	\\
pilot	&	1350	&	4506	&	0.090000	&	fort49	&	1037	&	1467	&	0.050000	\\
pilot4	&	389	&	1069	&	0.030000	&	fort51	&	1042	&	1473	&	0.060000	\\
pilot87	&	1968	&	6367	&	0.220000	&	fort52	&	1041	&	1471	&	0.060000	\\
pilot\_ja&	795	&	1834	&	0.040000	&	fort53	&	1041	&	1471	&	0.050000	\\
pilot\_we&	691	&	2621	&	0.050000	&	fort54	&	1041	&	1471	&	0.050000	\\
pilotnov&	830	&	2089	&	0.050000	&	fort55	&	1041	&	1471	&	0.060000	\\
recipe	&	61	&	120	&	0.000001	&	fort56	&	1041	&	1471	&	0.040000	\\
sc105	&	104	&	162	&	0.000001	&	fort57	&	1041	&	1471	&	0.060000	\\
sc205	&	203	&	315	&	0.000001	&	fort58	&	1041	&	1471	&	0.050000	\\
sc50a	&	49	&	77	&	0.000001	&	fort59	&	1041	&	1471	&	0.050000	\\
sc50b	&	48	&	76	&	0.000001	&	fort60	&	1041	&	1471	&	0.060000	\\
scagr25	&	344	&	543	&	0.010000	&	fort61	&	1041	&	1471	&	0.050000	\\
scagr7	&	92	&	147	&	0.000001	&	x1	&	983	&	1413	&	0.050000	\\
scfxm1	&	268	&	526	&	0.020000	&	x2	&	983	&	1413	&	0.050000	\\
scfxm2	&	536	&	1052	&	0.030000	&	pata01	&	122	&	1241	&	0.010000	\\
scfxm3	&	804	&	1578	&	0.040000	&	pata02	&	122	&	1241	&	0.020000	\\
scorpion&	180	&	239	&	0.060000	&	patb01	&	57	&	143	&	0.000001	\\
scrs8	&	418	&	1183	&	0.080000	&	patb02	&	57	&	143	&	0.000001	\\
scsd1	&	77	&	760	&	0.260000	&	vschna02&	122	&	1363	&	0.010000	\\
scsd6	&	148	&	1350	&	0.430000	&	vschnb01&	57	&	144	&	0.000001	\\
scsd8	&	397	&	2750	&	0.070000	&	vschnb02&	58	&	202	&	0.000001	\\
sctap1	&	269	&	608	&	0.010000	&	willett	&	184	&	588	&	0.030000	\\
sctap2	&	977	&	2303	&	0.010000	&	ex01	&	234	&	1555	&	0.070000	\\
sctap3	&	1346	&	3113	&	0.020000	&	ex02	&	226	&	1547	&	0.070000	\\
seba	&	2	&	9	&	0.130000	&	ex05	&	831	&	7747	&	0.090000	\\
share1b	&	107	&	243	&	0.040000	&	ex06	&	824	&	7778	&	0.090000	\\
share2b	&	92	&	158	&	0.010000	&	ex09	&	1818	&	18120	&	0.340000	\\
shell	&	487	&	1450	&	0.020000	&		&		&		&		        \\
\hline
\end{tabular}}
\end{table}




\subsection{Computational Results}

Table \ref{tb06} shows the relative gain of each algorithm over all problems at the five different times, i.e., the percentage of problems in which the algorithm obtained the lowest value of the residual $||b^k||$ at times $t1$ to $t5$.

\begin{table}[h!]
\centering
\caption{Percentage of the relative gain of the algorithms on the problems at five different times}
\label{tb06}
{\scriptsize
{\begin{tabular}{@{}|l|c|c|c|c|c|}
\hline
{\bf Algorithm} & {\bf t1} & {\bf t2} & {\bf t3} & {\bf t4} & {\bf t5} \\
\hline
Algorithm with $p=2$  & 15.89\% & 11.92\% & 22.51\% &  5.96\% &  7.94\% \\
Algorithm with $p=4$  & 31.12\% & 34.43\% & 27.81\% & 19.96\% & 23.83\% \\
Algorithm with $p=10$ & 20.52\% & 27.15\% & 25.16\% & 25.16\% & 23.84\% \\
Algorithm with $p=20$ & 32.45\% & 26.49\% & 24.50\% & 50.99\% & 44.37\% \\
\hline
\end{tabular}}}
\end{table}

According to Table \ref{tb06}, the family of algorithms is superior to the OPAA, which corresponds to $p = 2$. At times $t4$ and $t5$, it can be noted that increasing $p$, within moderate values, clearly improves the behavior of the algorithm.

The performance of the algorithms was also analyzed using performance profiles \cite{DOMO2002}. The distance of the residual $||b^k||$ to the origin was used as the performance measure. In these graphs, $\rho(1)$ represents the algorithm's total gain, and the best-performing algorithms are the ones whose curves lie above the others.

In Figure \ref{fgp241020}, the performance profiles of the four algorithms at time $t5$ are compared. The figure shows that the three family algorithms with moderate values of $p$ are superior to the OPAA, both in terms of efficiency and robustness. Table \ref{tb06} shows the efficiency of the algorithms in the $t5$ column; the efficiency is reflected in the curves of the three family algorithms lying above the OPAA curve.

Theorem \ref{tttu1} states that, theoretically, a higher number of columns in the subproblem will not result in a less efficient algorithm; i.e., if we use an algorithm with $p_1$ coordinates and another with $p_2$ coordinates, where $p_1\leq p_2$, then the algorithm with $p_2$ coordinates will, in the worst case, be as efficient as the algorithm with $p_1$ coordinates. Figure \ref{fgp241020} shows that, in practice, for moderate values of $p$, the algorithm with $p_2$ coordinates is indeed more efficient and more robust.


\begin{figure}[ht]
\centerline{\includegraphics[width=12.5cm,height=4.0cm]{figp2p4p10p20.eps}}
\caption{Performance profile of four algorithms in t5 time.}
\label{fgp241020}
\end{figure}


Figures \ref{figp220}, \ref{figp210}, and \ref{figp24} compare the performance profiles of the family algorithms with $p = 20$, $p = 10$, and $p = 4$ against the OPAA at time $t5$. The three figures show that the three family algorithms are more efficient and more robust. The highest efficiency was achieved by the algorithm with $p = 4$ ($88\%$), followed by the algorithm with $p = 10$ ($84\%$) and by the algorithm with $p = 20$ ($74\%$). Thus, at time $t5$, the family algorithms achieved higher efficiency than the OPAA. As for robustness, the three figures show that the curves of the three family algorithms lie above the OPAA curve, demonstrating their strength.

\begin{figure}[ht]
\centerline{\includegraphics[width=12.5cm,height=3.5cm]{figp2p20.eps}}
\caption{Performance profile of the algorithms with $p=2$ and $p=20$ in time $t5.$}
 \label{figp220}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=12.5cm,height=3.5cm]{figp2p10.eps}
 \caption{Performance profile of the algorithms with $p=2$ and $p=10$ in time $t5.$}
 \label{figp210}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=12.5cm,height=3.5cm]{figp2p4.eps}
 \caption{Performance profile of the algorithms with $p=2$ and $p=4$ in time $t5.$}
 \label{figp24}
\end{figure}

Thus, the performance profiles show that the family of algorithms performs well for moderate values of $p$.

When comparing the family algorithms with $p = 20$, $p = 10$, and $p = 4$ against the OPAA at the other times, the best efficiencies were obtained at time $t4$: $76\%$, $77\%$, and $83\%$ for $p = 20$, $p = 10$, and $p = 4$, respectively. These were followed by times $t2$ and $t1$, with $62\%$ and $73\%$ for $p = 20$, $81\%$ and $56\%$ for $p = 10$, and $62\%$ and $72\%$ for $p = 4$; at time $t3$, the efficiencies were $54\%$, $60\%$, and $68\%$. Thus, the family of algorithms was also superior at all times used in the experiments.

\section{Conclusions}

The family of simple algorithms arose from a generalization of the OPAA. The major advantages of this family of algorithms are its simplicity and its fast initial convergence. This paper presented a comparison between the family of simple algorithms for linear programming and the OPAA. In addition, it was proved that the algorithm with $p = 1$ is equivalent to von Neumann's algorithm. Finally, a sufficient condition was given under which the family of algorithms performs better than von Neumann's algorithm.

The experiments were performed in a framework similar to the one reported by Gon\c calves, Storer and Gondzio in \cite{G2}. The computational results show the superiority of the family of algorithms for moderate values of $p$ in comparison with the OPAA. The performance profile graphs indicate that the family of algorithms is significantly more efficient and robust. In particular, for moderate values of $p$, Figure \ref{fgp241020} clearly illustrates the statement of Theorem \ref{tttu1} in practice: an increase in the value of $p$ is followed by an increase in the efficiency and robustness of the optimal adjustment algorithm for $p$ coordinates.

Despite the improvements with respect to the OPAA, the family of algorithms is not practical for solving linear programming problems to optimality. However, it can be useful in some situations, such as improving the starting point for interior point methods, as in \cite{Ca1}, or working in combination with interior point methods by exploiting its fast initial convergence, as reported in \cite{Ca2}. Nevertheless, future research is needed to measure the impact that the family of algorithms can have in this direction.\\

\footnotesize{\noindent\textbf{Acknowledgement}\quad
This work was partially funded by the Foundation for the Support of Research of the State of S\~ao Paulo (FAPESP) and Brazilian Council for the Development of Science and Technology (CNPq).}



\begin{abstract}
{\bf Resumo}. Este artigo apresenta uma comparação entre uma família de algoritmos simples para programação linear e o algoritmo de ajustamento pelo par ótimo. O algoritmo de ajustamento pelo par ótimo foi desenvolvido para melhorar a convergência do algoritmo de von Neumann, que é muito interessante por causa de sua simplicidade. Porém, não é prático resolver problemas de programação linear até a otimalidade com ele, visto que sua convergência é lenta. A família de algoritmos simples surgiu da generalização do algoritmo de ajustamento pelo par ótimo, incluindo um parâmetro sobre o número de colunas escolhidas, em vez de manter fixas duas. Esta generalização preserva a simplicidade dos algoritmos e suas boas qualidades. Apresentamos experimentos numéricos sobre um conjunto de problemas de programação linear que mostram melhorias significativas em relação ao algoritmo de ajustamento pelo par ótimo.

{\bf Palavras-chave}. Programação Linear, Algoritmo de von Neumann, Algoritmos Simples.
\end{abstract}

\bibliographystyle{ieeetr}
\bibliography{artigotema}

\end{document}
