A Google TechTalk, presented by Peter I. Frazier, 2021/06/08
ABSTRACT: Bayesian optimization is a powerful tool for optimizing expensive-to-evaluate, non-convex, derivative-free objective functions. While BayesOpt has historically been deployed as a black-box optimizer, recent advances show considerable gains from "peeking inside the box". For example, when tuning hyperparameters of deep neural networks to minimize validation error, state-of-the-art BayesOpt tuning methods leverage the ability to stop training early, restart previously paused training runs, train and test on a strict subset of the available data, and warm-start from previously tuned network architectures. We describe new "grey-box" Bayesian optimization methods that selectively exploit problem structure to deliver state-of-the-art performance. We then briefly describe applications of these methods to tuning deep neural networks, inverse reinforcement learning, and calibrating physics-based simulators to observational data.
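To make the black-box baseline concrete, below is a minimal sketch (not from the talk) of a standard Bayesian optimization loop with a Gaussian-process surrogate and the expected-improvement acquisition function, written in Python with NumPy, SciPy, and scikit-learn. The toy objective and the 1-D search domain are illustrative assumptions; the grey-box methods described in the talk extend this kind of loop with early stopping, data subsets, and warm starts rather than treating each evaluation as a single opaque call.

# Minimal black-box Bayesian optimization sketch (assumed toy setup, not the talk's method).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy stand-in for an expensive objective, e.g. validation error of a tuned model.
    return np.sin(3.0 * x) + 0.1 * (x - 0.5) ** 2

def expected_improvement(x_cand, gp, f_best):
    # EI for minimization: expected amount by which a candidate beats the incumbent.
    mu, sigma = gp.predict(x_cand.reshape(-1, 1), return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(3, 1))          # small initial design
y = np.array([objective(x[0]) for x in X])

for _ in range(20):                              # evaluation budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    gp.fit(X, y)                                 # refit the surrogate to all data so far
    cand = np.linspace(0.0, 2.0, 500)            # dense grid over the 1-D domain
    ei = expected_improvement(cand, gp, y.min())
    x_next = cand[np.argmax(ei)]                 # pick the acquisition maximizer
    X = np.vstack([X, [[x_next]]])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)][0], "best value:", y.min())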
About the speaker: Peter Frazier is the Eleanor and Howard Morgan Professor of Operations Research and Information Engineering at Cornell University. He is also a Staff Data Scientist at Uber. He leads Cornell's COVID-19 Mathematical Modeling Team, which designed Cornell's testing strategy to support safe in-person education during the pandemic. His academic research during more ordinary times is in Bayesian optimization, incentive design for social learning, and multi-armed bandits. At Uber, he managed UberPool's data science group and currently helps design Uber's pricing systems.