Monitoring and Quality Assurance of Complex ML Deployments via Assertions - with Stanford Dawn Lab

Published: 27 October 2021
on channel: Scale AI
Views: 760
Likes: 9

Machine Learning (ML) is increasingly being deployed by teams in complex, real-world settings. While much research effort has focused on the training and validation stages, other parts of the deployment pipeline have been neglected by the research community. In this talk, Daniel Kang will describe two abstractions (model assertions and learned observation assertions) that allow users to encode domain knowledge to find errors at deployment time and in labeling pipelines. He will show real-world errors in labels and in ML models deployed in autonomous vehicles, visual analytics, and ECG classification that these abstractions can find. He will further describe how the abstractions can be used to improve model quality by up to 2x at a fixed labeling budget. This work is being conducted jointly with researchers from Stanford University and Toyota Research Institute.
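To make the idea of a model assertion concrete, here is a minimal sketch in Python of one common example from this line of work: flagging detections that "flicker" across consecutive video frames, which is physically implausible and often indicates a missed detection. The `Box` type, function name, and the exact flickering heuristic are illustrative assumptions, not the API presented in the talk.

```python
# Minimal sketch of a model assertion for object detection in video.
# Assumes per-frame detections; the Box type and heuristic are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class Box:
    """A detected object in one video frame."""
    frame: int
    label: str
    x: float
    y: float
    w: float
    h: float


def flickering_assertion(detections: List[List[Box]]) -> List[int]:
    """Flag frame indices where an object class is present in the previous
    and next frames but missing in the current one -- a likely model error."""
    flagged = []
    for i in range(1, len(detections) - 1):
        prev_labels = {b.label for b in detections[i - 1]}
        curr_labels = {b.label for b in detections[i]}
        next_labels = {b.label for b in detections[i + 1]}
        # Classes seen both before and after frame i but absent at frame i
        flickering = (prev_labels & next_labels) - curr_labels
        if flickering:
            flagged.append(i)
    return flagged
```

Assertions like this are black-box checks over a model's inputs and outputs, so they can run at deployment time or over labeling outputs without access to model internals; flagged examples can then be prioritized for relabeling or retraining.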

