Distributed Optimization Methods for Large Scale Unconstrained Optimization Problems
Many modern applications require networks of computational agents to solve optimization problems in a cooperative manner, while the growing interest in Big Data and machine learning calls for methods that are able to solve problems of increasingly large dimension. This poses many challenges in the field of optimization, as most classical methods are not suitable for the distributed framework: new algorithms need to be developed that are able to deal with the practical limitations arising from the distributed setting. This thesis focuses on distributed methods for large scale optimization. The contributions of this thesis to the area of distributed optimization are the following. First, we extend the convergence analysis of a class of existing first-order distributed methods to the case of time-varying networks and uncoordinated time-varying stepsizes. This extension is particularly relevant, as changes in the communication network are common in practical applications, due both to possible technical failures in the communication links and to the increasing relevance of mobile sensor networks. One of the main issues in distributed optimization is the selection of the stepsize: classical globalization strategies such as line search are not feasible in the multi-agent framework, while employing a fixed stepsize is known to often cause the method to be slow and usually requires knowledge of the regularity constants of the problem, which can be hard to estimate distributedly. To overcome these issues we propose a distributed Inexact Newton method that relies on an adaptive choice of the stepsize, which can be computed distributedly and, under suitable regularity assumptions, ensures both global convergence and fast local convergence, as in the centralized case. Systems of linear equations of large dimension are a problem of interest both in themselves and as a part of second-order optimization methods.
In general, in each iteration of a second-order method, such as the Inexact Newton method mentioned above, one has to solve a system of linear equations, either approximately or exactly, to obtain the search direction. Again, the distributed environment represents a challenge. Fixed point methods are known to be very effective in the centralized framework for linear systems of large dimension. We propose here a class of fixed point methods that work in the distributed setting. We show that such methods converge for both static and time-varying networks, achieving convergence results analogous to those that can be proved in the centralized case. Finally, in the last part of the thesis we consider least squares problems of very large dimension, motivated by the digitalization of cadastral maps. The dimension of the problem represents the main difficulty for classical methods, while its sparse structure presents an opportunity to be exploited in a parallel computational framework. We present an Inexact Levenberg-Marquardt method for sparse least squares problems. The method exploits the underlying structure of the problem to define a fixed-point strategy for the computation of the search direction that is suitable for parallelization. A theoretical analysis is presented, and we show both global and fast local convergence under the classical assumptions for least squares problems. All presented methods are implemented and tested on relevant examples. The numerical results confirm the theoretical findings. Furthermore, the proposed methods are compared with the corresponding state-of-the-art methods and their competitiveness is demonstrated.
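As an illustration of the fixed point approach to linear systems mentioned above, the following is a minimal sketch of the classical Jacobi iteration in its centralized form; the thesis itself develops distributed variants over static and time-varying networks, which are not shown here. The example matrix and tolerances are hypothetical.

```python
# Minimal sketch of the classical Jacobi fixed point iteration for A x = b,
# assuming A is strictly diagonally dominant (a standard sufficient condition
# for convergence). This is a centralized prototype, not the thesis method.
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Iterate the fixed point map x <- D^{-1}(b - R x),
    where D = diag(A) and R = A - D."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical diagonally dominant example
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

In a distributed setting, each agent would update only its own block of `x` using information received from its neighbors, which is what makes fixed point schemes of this type naturally parallelizable.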
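For the nonlinear least squares part, a basic centralized Levenberg-Marquardt iteration can be sketched as follows; it is a simplified prototype under assumed damping update rules, whereas the thesis method computes the search direction inexactly via a parallel fixed point strategy, which is not reproduced here. The residual function in the usage example is hypothetical.

```python
# Sketch of a basic Levenberg-Marquardt method for min ||r(x)||^2,
# solving the damped normal equations (J^T J + mu I) d = -J^T r exactly.
# Damping update rule (halve/double mu) is a simple illustrative choice.
import numpy as np

def levenberg_marquardt(r, J, x0, mu=1.0, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                      # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -g)
        if np.linalg.norm(r(x + d)) < np.linalg.norm(rx):
            x, mu = x + d, mu / 2.0        # accept step, reduce damping
        else:
            mu *= 2.0                      # reject step, increase damping
    return x

# Hypothetical zero-residual problem with solution x = (1, 2)
r = lambda x: np.array([x[0] - 1.0, x[1] - 2.0, x[0] * x[1] - 2.0])
J = lambda x: np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]]])
x = levenberg_marquardt(r, J, np.array([0.5, 0.5]))
```

The linear system solved in each iteration is exactly where a parallel fixed point scheme can replace the direct solve when the Jacobian is large and sparse.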
https://www.cris.uns.ac.rs/DownloadFileServlet/Disertacija166574195113816.pdf?controlNumber=(BISIS)