Papers by José Mario Martínez

Journal of Optimization Theory and Applications, 2000
We introduce a new model algorithm for solving nonlinear programming problems. No slack variables are introduced for dealing with inequality constraints. Each iteration of the method proceeds in two phases. In the first phase, feasibility of the current iterate is improved; in the second phase, the objective function value is reduced in an approximate feasible set. The point that results from the second phase is compared with the current point using a nonsmooth merit function that combines feasibility and optimality. This merit function includes a penalty parameter that changes between consecutive iterations. A suitable updating procedure for this penalty parameter is included, by means of which it can be increased or decreased along consecutive iterations. The conditions for feasibility improvement at the first phase and for optimality improvement at the second phase are mild, and large-scale implementation of the resulting method is possible. We prove that, under suitable conditions, which do not include regularity or existence of second derivatives, all the limit points of an infinite sequence generated by the algorithm are feasible, and that a suitable optimality measure can be made as small as desired. The algorithm is implemented and tested against the LANCELOT algorithm using a set of hard-spheres problems.
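A minimal sketch of a merit function of this kind: one scalar that trades off the objective value against an infeasibility measure through a penalty parameter. The specific weighted form below is an illustrative assumption, not the paper's exact definition.

```python
import numpy as np

def merit(f, h, x, theta):
    """Nonsmooth merit function combining optimality (f) and feasibility
    (norm of the constraint violation h) via a penalty parameter
    theta in (0, 1). Hypothetical form, for illustration only."""
    return theta * f(x) + (1.0 - theta) * np.linalg.norm(h(x))

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])

x = np.array([0.5, 0.5])      # feasible point: only the objective term remains
print(merit(f, h, x, 0.5))    # 0.25
```

A candidate point is accepted when it decreases this scalar, so progress in either feasibility or optimality (weighted by the current penalty parameter) can justify a step.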
Mathematical Programming, 1999
A model algorithm based on the successive quadratic programming method for solving the general nonlinear programming problem is presented. The objective function and the constraints of the problem are only required to be differentiable and their gradients to satisfy a Lipschitz condition. The strategy for obtaining global convergence is based on the trust-region approach. The merit function is a type of augmented Lagrangian. A new updating scheme is introduced for the penalty parameter, by means of which monotone increase is not necessary. Global convergence results are proved and numerical experiments are presented.
Generalized inverses and a new stable secant type minimization algorithm
Lecture Notes in Control and Information Sciences
Without Abstract
SIAM Journal on Optimization, 1994
We introduce a new method for maximizing a concave quadratic function with bounds on the variables. The new algorithm combines conjugate gradients with gradient projection techniques, as the algorithm of Moré and Toraldo (SIAM J. on Optimization 1, pp. 93-113) and other well-known methods do. A new strategy for deciding when to leave the current face is introduced, which makes it possible to obtain finite convergence even for a singular Hessian and in the presence of dual degeneracy. We present numerical experiments.
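The gradient-projection ingredient of such methods can be sketched with a plain projected gradient ascent on a box; the bound projection below is the standard one, while the conjugate-gradient and face-switching machinery of the paper is omitted, so this is a simplification for illustration only.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.minimum(np.maximum(x, lo), hi)

# Maximize the concave quadratic q(x) = -0.5 x'Ax + b'x over [0, 1]^2
# by projected gradient ascent (illustrative data, fixed steplength).
A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([1.0, 8.0])
lo, hi = np.zeros(2), np.ones(2)

x = np.zeros(2)
for _ in range(200):
    grad = b - A @ x                       # gradient of the concave quadratic
    x = project_box(x + 0.2 * grad, lo, hi)

print(x)  # converges to [0.5, 1.0]: the second variable hits its upper bound
```

The active face at the solution is exactly the set of bounds the iterate sits on; the paper's contribution concerns when to abandon that face rather than keep iterating inside it.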
Computing Supplementa, 2001
A practical algorithm for box-constrained optimization is introduced. The algorithm combines an active-set strategy with spectral projected gradient iterations. In the interior of each face a strategy that deals efficiently with negative curvature is employed. Global convergence results are given. Numerical results are presented.
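The "spectral" part of a spectral projected gradient iteration is the Barzilai-Borwein steplength, computed from the last displacement and gradient change. A safeguarded version can be sketched as follows; the helper and its safeguards are illustrative, not the paper's exact implementation.

```python
import numpy as np

def spg_steplength(x, x_prev, g, g_prev, lam_min=1e-10, lam_max=1e10):
    """Spectral (Barzilai-Borwein) steplength lambda = s's / s'y,
    where s = x - x_prev and y = g - g_prev, safeguarded to
    [lam_min, lam_max]; lam_max is returned when the curvature
    estimate s'y is not positive. Illustrative helper only."""
    s, y = x - x_prev, g - g_prev
    sy = s @ y
    if sy <= 0.0:
        return lam_max
    return min(lam_max, max(lam_min, (s @ s) / sy))

# For the quadratic f(x) = 0.5 x'Ax with A = diag(2, 4), a step along
# the first coordinate recovers the inverse curvature 1/2 there.
A = np.diag([2.0, 4.0])
x_prev, x = np.array([1.0, 1.0]), np.array([0.0, 1.0])
lam = spg_steplength(x, x_prev, A @ x, A @ x_prev)
print(lam)  # 0.5
```

Each SPG iterate is then the projection of `x - lam * g` onto the box, accepted under a (typically nonmonotone) line search.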
Comparing Algorithms for Solving Sparse Nonlinear Systems of Equations
SIAM Journal on Scientific and Statistical Computing, 1992
Comparing Algorithms for Solving Sparse Nonlinear Systems of Equations. SIAM Journal on Scientific and Statistical Computing 13, 459 (1992). Márcia A. Gomes-Ruggiero, José Mario Martínez, Antonio Carlos Moretti.
Numerical Linear Algebra with Applications, 2002
The search direction in unconstrained minimization algorithms for large-scale problems is usually computed as an iterate of the (preconditioned) conjugate gradient method applied to the minimization of a local quadratic model. In line-search procedures this direction is required to satisfy an angle condition, which says that the angle between the negative gradient at the current point and the direction is bounded away from π/2. In this paper, it is shown that the angle between conjugate gradient iterates and the negative gradient strictly increases as the conjugate gradient algorithm proceeds. Therefore, the interruption of the conjugate gradient sub-algorithm when the angle condition does not hold is theoretically justified. Copyright © 2002 John Wiley & Sons, Ltd.
Numerical Functional Analysis and Optimization, 1995
Numerical Functional Analysis and Optimization, 2000

Numerical Algorithms, 2010
Interior-point algorithms are nowadays among the most efficient techniques for processing monotone complementarity problems. In this paper, a procedure for globalizing interior-point methods by using the maximum stepsize is introduced. The algorithm combines exact or inexact interior-point and projected-gradient search techniques and employs a line-search procedure for the natural merit function associated with the complementarity problem. Furthermore, for linear complementarity problems, the maximum stepsize is shown to be accepted in all iterations employing the exact interior-point search direction. A number of classes of complementarity and optimization problems are discussed which the algorithm is able to process, by either finding a solution or showing that no solution exists. A modification of the algorithm for dealing with infeasible linear complementarity problems is introduced which, in practice, solely employs interior-point search directions. Computational experience on the solution of complementarity problems and linear and convex quadratic programs by the new algorithm is included, illustrating the efficiency of this methodology.
Nonlinear Analysis: Theory, Methods & Applications, 2000
Mathematics of Computation, 1992
In this paper, we show that the main results of the local convergence theory for least-change secant update methods of Dennis and Walker (SIAM J. Numer. Anal. 18 (1981), 949-987) can be proved using the theory introduced recently by Martínez (Math. Comp. 55 (1990), 143-167). In addition, we exhibit two generalizations of well-known methods whose local convergence can be easily proved using Martínez's theory.
Mathematical Programming, 2010
Complementarity problems may be formulated as nonlinear systems of equations with non-negativity constraints. The natural merit function is the sum of squares of the components of the system. Sufficient conditions are established which guarantee that stationary points are solutions of the complementarity problem. Algorithmic consequences are discussed.
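One common way to cast a complementarity problem (find x ≥ 0 with F(x) ≥ 0 and x'F(x) = 0) as a constrained nonlinear system introduces slack variables w: solve F(x) - w = 0 and x_i w_i = 0 with x, w ≥ 0. The sketch below uses this reformulation, which is consistent with, but not necessarily identical to, the paper's; the sum-of-squares merit is then zero exactly at solutions.

```python
import numpy as np

def merit(F, x, w):
    """Sum-of-squares merit for the complementarity problem written as
    the nonlinear system F(x) - w = 0, x_i * w_i = 0 with x, w >= 0.
    Illustrative reformulation: zero exactly at solutions."""
    r = np.concatenate([F(x) - w, x * w])
    return r @ r

# Toy one-dimensional problem: F(x) = x - 1, solved by x = 1, w = 0.
F = lambda x: x - 1.0
print(merit(F, np.array([1.0]), np.array([0.0])))  # 0.0 at the solution
print(merit(F, np.array([0.0]), np.array([0.0])))  # 1.0 away from it
```

The abstract's question is when a mere stationary point of this merit function, subject to the non-negativity constraints, is guaranteed to be an actual solution rather than a spurious local minimizer.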
Mathematical Methods of Operations Research, 2005
Order-value optimization (OVO) is a generalization of the minimax problem motivated by decision-making problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium constraints is deduced. The relation between OVO and this nonlinear-programming reformulation is studied. Particular attention is given to the relation between local minimizers and stationary points of both problems.
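The order-value objective is simple to state: given functions f_1, …, f_m, minimize the p-th smallest of their values at x. With p = m this recovers the minimax objective, which is why OVO generalizes it. The notation below is an illustrative rendering, not the paper's.

```python
def order_value(funcs, p, x):
    """Order-value objective: the p-th smallest (1-indexed) of the
    values f_1(x), ..., f_m(x). With p = len(funcs) this is the
    minimax objective max_i f_i(x). Illustrative helper."""
    return sorted(f(x) for f in funcs)[p - 1]

# Three functions evaluated at x = 2 give values 2, 4, -2.
funcs = [lambda x: x, lambda x: x ** 2, lambda x: -x]
print(order_value(funcs, 2, 2.0))  # 2.0: the 2nd smallest of (-2, 2, 4)
print(order_value(funcs, 3, 2.0))  # 4.0: with p = m, the max
```

The nonsmoothness comes from the index of the p-th smallest value changing with x, which is what the equilibrium-constraint reformulation in the paper makes explicit.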

Journal of Chemical Theory and Computation, 2013
Large-scale electronic structure calculations usually involve huge nonlinear eigenvalue problems. A method for solving these problems without employing expensive eigenvalue decompositions of the Fock matrix is presented in this work. The sparsity of the input and output matrices is preserved at every iteration, and the memory required by the algorithm scales linearly with the number of atoms of the system. The algorithm is based on a projected gradient iteration applied to the constraint fulfillment problem. The computer time required by the algorithm also scales approximately linearly with the number of atoms (or non-null elements of the matrices), and the algorithm is faster than standard implementations of modern eigenvalue decomposition methods for sparse matrices containing more than 50,000 non-null elements. The new method reproduces the sequence of semiempirical SCF

The Journal of Chemical Physics, 2004
As more complex systems become accessible to quantum chemical calculations, the reliability of the algorithms used becomes increasingly important. Trust-region strategies comprise a large family of optimization algorithms that incorporate both robustness and applicability for a great variety of problems. The objective of this work is to provide a basic algorithm and an adequate theoretical framework for the application of globally convergent trust-region methods to electronic structure calculations. Closed-shell restricted Hartree–Fock calculations are addressed as finite-dimensional nonlinear programming problems with weighted orthogonality constraints. A Levenberg–Marquardt-like modification of a trust-region algorithm for constrained optimization is developed for solving this problem. It is proved that this algorithm is globally convergent. The subproblems that ensure global convergence are easy-to-compute projections and are dependent only on the structure of the con...

Augmented lagrangians and sphere packing problems
International Journal of Computer Mathematics, 1998
This paper has two main objectives. Firstly, the performance of a recently developed Nonlinear Programming algorithm based on Augmented Lagrangians on a family of optimization problems related to the classical Kissing Problem is reported. The test problems are easily defined in purely mathematical terms, and their reproduction using standard scientific languages does not seem to be prone to human errors. Therefore, the figures presented here can be useful to developers of other optimization algorithms. Secondly, the practical results on the geometrical problems described in this paper are of independent interest from a large variety of perspectives, including the historical one. In particular, there exist a large number of related open mathematical problems, and the ability to deal with them numerically should aid their rigorous elucidation.
International Journal of Computer Mathematics, 1997
Non-monotone spectral projected gradient method applied to full waveform inversion
Geophysical Prospecting, 2006
Computing, 1990
An Algorithm for Solving Nonlinear Least-Squares Problems with a New Curvilinear Search. We propose a modification of an algorithm introduced by Martínez (1987) for solving nonlinear least-squares problems. As in the previous algorithm, after the calculation of an approximated Gauss-Newton direction d, we obtain the next iterate on a two-dimensional subspace which includes d. However, we simplify the process of searching for the new point, and we define the plane using a scaled gradient direction instead of the original gradient. We prove that the new algorithm has global convergence properties. We present some numerical experiments.
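The Gauss-Newton direction that seeds such a search is the least-squares solution of the linearized residual equations. A minimal sketch (the helper name and toy data are illustrative):

```python
import numpy as np

def gauss_newton_direction(J, r):
    """Gauss-Newton direction for residuals r with Jacobian J: the
    least-squares solution d of J d = -r, i.e. the minimizer of
    ||r + J d||_2. Illustrative helper."""
    d, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return d

# Residuals r(x) = (x0 - 1, 2*(x1 - 3)) evaluated at x = (0, 0):
J = np.array([[1.0, 0.0], [0.0, 2.0]])
r = np.array([-1.0, -6.0])
print(gauss_newton_direction(J, r))  # [1. 3.]: steps straight to the minimizer
```

In the algorithm above, this direction and a (scaled) gradient direction span the two-dimensional plane on which the next iterate is sought.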