Tea Time Talks are back for another year. This summer lecture series, presented by Amii and the RLAI Lab at the University of Alberta, gives researchers the chance to discuss early-stage ideas and prospective research. Join us for another series of informal 20-minute talks where AI leaders discuss the future of machine learning research.
Abstract:
Online learning has emerged as a focal point of numerous research efforts in recent years. Despite ongoing debates and ambiguities surrounding its precise definition, the widely used Backpropagation algorithm is known to face several significant challenges in many online learning scenarios. These include scalability issues; dependence on stationary or independent and identically distributed (i.i.d.) data; and the locking of the forward, backward, and update passes into three serialized phases, which results in high latency. In addition, Backpropagation requires symmetric weights in the forward and backward passes, cannot perform parallel updates across layers, achieves low utility and throughput relative to the computational capacity of the system, and lacks biological plausibility.
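To make the locking problem concrete, here is a minimal NumPy sketch of a two-layer network trained with plain backprop, annotated with the three serialized phases the abstract describes. All names and hyperparameters are illustrative assumptions, not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (4, 8))   # layer-1 weights (illustrative sizes)
W2 = rng.normal(0.0, 0.1, (8, 1))   # layer-2 weights
x = rng.normal(size=(16, 4))        # a mini-batch of inputs
y = rng.normal(size=(16, 1))        # regression targets

# Phase 1: forward pass -- layer 2 cannot start until layer 1 finishes.
h = np.tanh(x @ W1)
y_hat = h @ W2
loss_before = float(np.mean((y_hat - y) ** 2))

# Phase 2: backward pass -- layer 1's gradient needs layer 2's error signal,
# and it is transported back through W2 itself (the weight-symmetry requirement).
err = (y_hat - y) / len(x)          # error signal (half-MSE gradient scale)
g_W2 = h.T @ err
g_h = err @ W2.T                    # reuses W2: symmetric forward/backward weights
g_W1 = x.T @ (g_h * (1.0 - h ** 2))

# Phase 3: update pass -- no weight may change until both passes complete.
lr = 0.1
W1 -= lr * g_W1
W2 -= lr * g_W2
loss_after = float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2))
```

Because each phase must finish before the next begins, no two layers are ever updated concurrently; this serialization is the latency cost the abstract refers to.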
This work briefly surveys other categories of learning solutions for credit assignment in neural networks, highlighting their shortcomings. We then introduce an asynchronous local learning algorithm, Asynchronous Propagation (AProp), which exploits phenomena that emerge from decentralized updates of layers. We present the intuition behind how AProp can enable parallel updates across neural network layers, potentially overcoming some of the limitations of Backpropagation in online learning.
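For intuition about what "decentralized updates of layers" can look like in general, the toy sketch below gives each layer its own local auxiliary loss, so no layer waits on a global backward pass. This illustrates layer-local (decoupled) learning broadly; it is NOT the AProp algorithm, whose details are presented in the talk. All variable names and the local-readout construction are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 4))        # inputs (illustrative sizes)
y = rng.normal(size=(32, 1))        # targets

W1 = rng.normal(0.0, 0.1, (4, 8))   # layer-1 weights
R1 = rng.normal(0.0, 0.1, (8, 1))   # layer-1's LOCAL readout (hypothetical)
W2 = rng.normal(0.0, 0.1, (8, 1))   # layer-2 weights

loss_start = float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2))

lr = 0.1
for _ in range(100):
    h = np.tanh(x @ W1)
    # Layer 1 updates from its local readout loss only; it never waits for
    # layer 2's gradient, so in principle both updates could run in parallel.
    e1 = (h @ R1 - y) / len(x)
    g_h = (e1 @ R1.T) * (1.0 - h ** 2)
    W1 -= lr * (x.T @ g_h)
    R1 -= lr * (h.T @ e1)
    # Layer 2 updates independently from the activations it last received.
    e2 = (h @ W2 - y) / len(x)
    W2 -= lr * (h.T @ e2)

loss_end = float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2))
```

The point of the sketch is structural: because each layer's update depends only on locally available signals, the forward/backward/update locking of backprop disappears, which is the kind of parallelism the abstract motivates.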
Watch the video of the talk, "Tea Time Talks: Farzane Aminmansour, AProp: Decentralized Gradient-Based Learning Algorithm for DNNs," posted by Amii on 27 September 2024.