Scientists at New York University Report Research in Automation Science
Journal of Robotics & Machine Learning
© Copyright 2011 Journal of Robotics & Machine Learning via VerticalNews.com
"We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicates with the other agents over a time-varying network topology," researchers in New York City, United States report.
"For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multi-agent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process," wrote I. Lobel and colleagues, New York University.
The researchers concluded: "Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide almost sure convergence results for our subgradient algorithm."
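The quoted description combines two ingredients: a consensus (averaging) step over whichever links happen to be up at each instant, and a local subgradient step with a diminishing stepsize. The sketch below is an illustrative toy version of that scheme, not the authors' exact algorithm: each agent i holds a hypothetical local objective f_i(x) = (x - a_i)^2, links fail independently over time with probability p_fail, and symmetric 1/n averaging weights (an assumption) keep the mixing matrix doubly stochastic.

```python
import random

def distributed_subgradient(targets, steps=5000, p_fail=0.3, seed=0):
    """Toy distributed subgradient method over a random network.

    Each agent i privately minimizes f_i(x) = (x - targets[i])**2;
    the network-wide goal is min_x sum_i f_i(x), attained at the
    mean of the targets. Links fail i.i.d. over time with
    probability p_fail, mimicking the stochastic connectivity model.
    """
    rng = random.Random(seed)
    n = len(targets)
    x = list(targets)  # each agent starts at its own local minimizer
    for t in range(1, steps + 1):
        # Random graph for this round: each undirected link is up
        # independently with probability 1 - p_fail.
        active = [(i, j) for i in range(n) for j in range(i + 1, n)
                  if rng.random() > p_fail]
        # Consensus step: symmetric weights 1/n on active links keep
        # the averaging matrix doubly stochastic (mean is preserved).
        v = list(x)
        for i, j in active:
            d = (x[j] - x[i]) / n
            v[i] += d
            v[j] -= d
        # Local (sub)gradient step with diminishing stepsize 1/t;
        # the gradient of (x - a_i)^2 is 2*(x - a_i).
        alpha = 1.0 / t
        x = [v[i] - alpha * 2.0 * (v[i] - targets[i]) for i in range(n)]
    return x

# With targets [0, 2, 4, 6], all agents should agree near the
# global optimum, mean(targets) = 3.0.
est = distributed_subgradient([0.0, 2.0, 4.0, 6.0])
```

The diminishing stepsize is what lets the subgradient steps vanish relative to the averaging, so the agents both reach consensus and settle at the common minimizer, which is the flavor of the almost-sure convergence result the authors report.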
Lobel and colleagues published their study in IEEE Transactions on Automatic Control (Distributed Subgradient Methods for Convex Optimization Over Random Networks. IEEE Transactions on Automatic Control, 2011;56(6):1291-1306).
For additional information, contact I. Lobel, New York University, Stern School of Business, Department of Information, Operations and Management Sciences, New York City, NY 10012, United States.
Publisher contact information for the journal IEEE Transactions on Automatic Control is: IEEE - Institute of Electrical and Electronics Engineers Inc., 445 Hoes Lane, Piscataway, NJ 08855-4141, USA.
This article was prepared by Journal of Robotics & Machine Learning editors from staff and other reports.