Date of Award

2017

Document Type

Open Access Dissertation

Department

Mathematics

College

College of Arts and Sciences

First Advisor

Vladimir Temlyakov

Abstract

In this dissertation we study the convergence and the rate of convergence of greedy-type algorithms under imprecise step evaluations. Such algorithms are in demand because calculation errors arise naturally in applications.

We address the question of strong convergence of the Chebyshev Greedy Algorithm (CGA), a generalization of the Orthogonal Greedy Algorithm (also known as Orthogonal Matching Pursuit), and show that the class of Banach spaces in which the CGA converges for all dictionaries and objective elements lies strictly between the smooth and the uniformly smooth Banach spaces.
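
As a concrete point of reference, the following is a minimal numpy sketch of the Orthogonal Matching Pursuit in a finite-dimensional Hilbert space; the random dictionary, the 2-sparse target, and the iteration count are illustrative assumptions, not taken from the dissertation.

```python
# A minimal sketch of the Orthogonal Matching Pursuit, the Hilbert-space
# counterpart of the CGA. The dictionary and target are illustrative.
import numpy as np

def omp(D, y, n_iter):
    """Greedily approximate y by columns of the unit-norm dictionary D."""
    residual, support = y.copy(), []
    for _ in range(n_iter):
        # Greedy step: pick the atom most correlated with the residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Chebyshev (orthogonal) step: best approximation of y from the
        # span of the atoms selected so far.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)                 # normalize the atoms
y = D[:, [3, 17]] @ np.array([1.0, -2.0])      # a 2-sparse target
support, residual = omp(D, y, 2)
print(sorted(support), np.linalg.norm(residual))  # recovers atoms 3 and 17
```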

We analyze an application-oriented modification of the CGA, the generalized Approximate Weak Chebyshev Greedy Algorithm (gAWCGA), in which every operation of the algorithm may be performed with a controlled inaccuracy in the form of both relative and absolute errors. Such flexibility is essential for numerical applications and simplifies the implementation of the algorithm. We obtain necessary and sufficient conditions for the convergence of the gAWCGA for all uniformly smooth Banach spaces, all dictionaries, and all elements.
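
To illustrate the kind of inaccuracy such a scheme tolerates, here is a hypothetical selection step in which any atom whose correlation with the residual is within a relative factor t and an absolute slack delta of the maximum is admissible; the names t and delta and their values are assumptions made for illustration only.

```python
# A hypothetical imprecise greedy selection with both a relative error
# (the weakness factor t) and an absolute error (the slack delta).
import numpy as np

def approximate_selection(D, residual, t=0.9, delta=1e-3):
    """Return the index of some atom passing the weakened criterion."""
    correlations = np.abs(D.T @ residual)
    threshold = t * correlations.max() - delta
    # Any admissible index may be returned; in practice one can stop
    # scanning the dictionary as soon as the first such atom is found.
    return int(np.flatnonzero(correlations >= threshold)[0])

rng = np.random.default_rng(1)
D = rng.standard_normal((40, 100))
D /= np.linalg.norm(D, axis=0)
print(approximate_selection(D, rng.standard_normal(40)))
```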

Greedy algorithms for convex optimization have recently attracted particular interest. We discuss algorithms that do not use the derivative of the objective function and thus offer an alternative to traditional methods of convex minimization. We recall two known algorithms, the Relaxed E-Greedy Algorithm (REGA(co)) and the E-Greedy Algorithm with Free Relaxation (EGAFR(co)), and introduce the Rescaled Relaxed E-Greedy Algorithm for convex optimization (RREGA(co)), which is computationally simpler than the EGAFR(co) and does not suffer from the limitations of the REGA(co).
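
As a rough illustration of a derivative-free relaxed greedy step in the spirit of the REGA(co), the sketch below updates the iterate by a convex combination of the current point and a dictionary element, chosen using only evaluations of the objective E; the objective, the dictionary, and the coarse grid over the relaxation parameter are illustrative assumptions, not the dissertation's construction.

```python
# One relaxed greedy step: the next iterate is (1 - lam) * x + lam * g
# for some dictionary element g and lam in [0, 1], selected purely from
# function values, with no derivative information.
import numpy as np

def relaxed_greedy_step(E, x, dictionary, lams=np.linspace(0, 1, 51)):
    """One derivative-free relaxed greedy step over a finite dictionary."""
    best_val, best_x = E(x), x
    for g in dictionary:
        for lam in lams:
            candidate = (1 - lam) * x + lam * g
            if E(candidate) < best_val:
                best_val, best_x = E(candidate), candidate
    return best_x

E = lambda x: np.sum((x - 0.3) ** 2)           # a simple convex objective
dictionary = [np.eye(4)[i] for i in range(4)]  # coordinate vectors
x = np.zeros(4)
for _ in range(10):
    x = relaxed_greedy_step(E, x, dictionary)
print(E(x))  # the objective value never increases across steps
```

Note that steps of this form keep the iterates in the convex hull of the dictionary and the starting point, which is one source of the limitations of the REGA(co) mentioned above.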

Rights

© 2017, Anton Dereventsov
