Abstract
Following their conception in the mid-twentieth century, computers have evolved from a landscape of isolated entities into a sprawling web of interconnected machines. Yet many of the methods we use to allow computers to work together still reflect these inherently isolated origins, with the aggregation of data or master-slave relationships still in common use. While sufficient for some types of applications, these approaches do not naturally reflect the collaboration strategies we observe in nature, raising the question of whether we can do better.
In parallel to these improvements in computer-to-computer communication, the emergence in recent years of new paradigms such as the Internet of Things (IoT), Big Data processing and cloud computing has placed an increasing importance on networked systems in many facets of the modern world. From power grid management, to autonomous vehicle navigation, to even our basic means of interaction through social media, these networks are a pervasive presence in our day-to-day lives. The vast amounts of data generated by these networks, and their ever-increasing sizes, make it impractical if not impossible to resort to traditional centralized processing, and therefore necessitate the search for new methods of signal processing within networked systems.
In this thesis we approach the task of distributed signal processing by exploiting the synergy between such tasks and equivalent convex optimization problems. Specifically, we focus on distributed convex optimization: solving optimization problems collaboratively across groups of computers, and developing distributed solvers for such tasks. Such solvers distinguish themselves by allowing only local computations at each computer in a network and the exchange of information between connected computers. In this way, distributed solvers naturally respect the structure of the underlying network in which they are deployed.
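To make the notion of "local computations plus neighbor exchange" concrete, the following is a minimal illustrative sketch of distributed average consensus, a textbook example of such a solver. The network, step size, and values are hypothetical and chosen for illustration; this is not an algorithm from the thesis itself.

```python
# Illustrative sketch (not from the thesis): distributed average consensus.
# Each node updates using only its own value and those of its direct
# neighbours, exactly the communication pattern of a distributed solver.

neighbors = {0: [1], 1: [0, 2], 2: [1]}   # a simple three-node path graph
x = {0: 1.0, 1: 4.0, 2: 7.0}              # local measurements; average is 4.0
step = 0.3                                 # consensus step size (hypothetical)

for _ in range(200):
    # synchronous update: each node moves toward its neighbours' values
    x = {i: xi + step * sum(x[j] - xi for j in neighbors[i])
         for i, xi in x.items()}

# every node converges toward the network-wide average, 4.0
print(x)
```

No node ever sees the full network, yet all nodes agree on a global quantity, which is the essential property distributed solvers preserve.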
In pursuit of this goal, we approach the task of distributed solver design through the lens of monotone operator theory. This theory provides a well-known platform for the derivation of many first-order convex solvers, and herein we demonstrate its use as a means of constructing and analyzing a number of algorithms for distributed optimization. The first major contribution of this thesis lies in the analysis and understanding of an existing algorithm from the literature termed the primal-dual method of multipliers (PDMM). In particular, by demonstrating a novel interpretation of PDMM from the perspective of monotone operator theory, we are able to better understand its convergence characteristics and to highlight sufficient conditions under which PDMM converges at a geometric rate. Furthermore, we quantify the impact that network topology has on these convergence rates, drawing a direct connection between the spectral characteristics of networks and distributed optimization.
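The link between spectral characteristics and convergence can be illustrated with a standard quantity from spectral graph theory, the algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian), which governs how fast simple consensus-style iterations mix. This is a generic illustration of the idea, not the thesis's own analysis of PDMM; the graphs below are hypothetical.

```python
# Illustration (not the thesis's analysis): the spectrum of the graph
# Laplacian reflects how quickly information spreads through a network.
import numpy as np

def laplacian(n_nodes, edges):
    """Build the combinatorial graph Laplacian L = D - A."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

path = laplacian(4, [(0, 1), (1, 2), (2, 3)])
full = laplacian(4, [(i, j) for i in range(4) for j in range(i + 1, 4)])

# second-smallest eigenvalue ("algebraic connectivity"): larger values mean
# faster mixing, hence faster convergence of consensus-style iterations
fiedler = lambda L: np.sort(np.linalg.eigvalsh(L))[1]
print(fiedler(path), fiedler(full))  # the path graph is less connected
```

Sparsely connected topologies (the path) have a small algebraic connectivity, while densely connected ones (the complete graph) have a large one, mirroring the topology-dependent rates discussed above.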
Secondly, we explore the space of solver design by proposing novel algorithms for distributed networks. For the family of separable optimization problems, those with separable objectives and constraints, we demonstrate a distributed solver design using a specific lifted dual form. Based on monotone operator theory, the convergence analysis of the proposed method follows naturally from well-known results and broadens the class of distributable problems compared to the likes of PDMM. Furthermore, in the case of time-varying consensus problems, we propose a new algorithm by combining a network-dependent metric choice with classic operator splitting methods. Again, the monotone basis of this algorithm facilitates its convergence analysis, and empirically the method is also shown to converge for general closed, convex and proper functions.
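For readers unfamiliar with operator splitting, the following is a minimal sketch of one classic splitting method, Douglas-Rachford, applied to a toy problem of my own choosing (it is not a problem or algorithm from the thesis): minimizing |x| + 0.5*(x - 3)**2, whose minimizer is x = 2. Splitting methods of this kind are exactly the fixed-point iterations that monotone operator theory analyzes.

```python
# Hedged illustration of classic operator splitting (Douglas-Rachford).
# Toy problem (hypothetical, not from the thesis): min |x| + 0.5*(x - 3)**2,
# split into f(x) = |x| and g(x) = 0.5*(x - 3)**2; the minimizer is x = 2.

t = 1.0  # step size

def prox_f(z):
    # prox of f(x) = |x| is soft-thresholding
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

def prox_g(z):
    # prox of g(x) = 0.5*(x - 3)**2 has a closed form
    return (z + t * 3.0) / (1.0 + t)

z = 0.0
for _ in range(100):
    x = prox_f(z)
    z = z + prox_g(2 * x - z) - x   # Douglas-Rachford fixed-point update

print(prox_f(z))  # converges to the minimizer, 2.0
```

Each prox step touches only one piece of the objective, which is what makes such splittings attractive when the pieces live on different nodes of a network.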
Finally, we demonstrate how these methods can be used for practical distributed signal processing in networks by considering the case of multichannel speech enhancement in wireless acoustic sensor networks. By combining a particular model of the acoustic scene with the algorithms mentioned above, the proposed method is not only distributable but also offers greater resilience to steering vector mismatch than other standard approaches. This example also highlights the importance of understanding both the target application and the distributed solvers themselves in developing effective solutions.
Overall, this thesis provides a first foray into the world of distributed optimization through the lens of monotone operator theory. We feel that this perspective provides an ideal reference point for the analysis of such algorithms while in turn providing a general framework for convex optimization solver design. While this thesis is not the end of this branch of research, it indicates the potential of monotone operator theory as a unifying method for the development and analysis of distributed optimization solutions.
Original language: English
Award date: 3 Jun 2019
Print ISBNs: 9789463840415
Publication status: Published - 2019
Keywords
 Distributed Signal Processing
 Convex Optimization
 Monotone Operator Theory
 Wireless Sensor Networks