Ludwig Schmidt (University of Washington)
https://simons.berkeley.edu/talks/lud...
Large Language Models and Transformers
Researchers have proposed many methods to make neural networks more reliable under distribution shift, yet substantial room for improvement remains. Are better training algorithms or better training data the more promising way forward? In this talk, we study this question in the context of OpenAI's CLIP model for learning from image-text data.
First, we survey the current robustness landscape based on a large-scale experimental study involving more than 200 different models and test conditions. The CLIP models stand out with unprecedented robustness on multiple challenging distribution shifts. To further improve CLIP, we then introduce new methods for reliably fine-tuning models by interpolating the weights of multiple models. Next, we investigate the cause of CLIP's robustness via controlled experiments that disentangle the influence of language supervision and training distribution. While CLIP leveraged large-scale language supervision for the first time, its robustness actually comes from the pre-training dataset.
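The weight-interpolation idea mentioned above can be sketched as a simple linear blend of two models' parameters, e.g. a zero-shot model and its fine-tuned counterpart. The function and parameter names below are illustrative, not the talk's actual implementation; real models would interpolate each parameter tensor rather than scalars.

```python
def interpolate_weights(zero_shot, fine_tuned, alpha):
    """Linearly interpolate two parameter dicts:
    (1 - alpha) * zero_shot + alpha * fine_tuned.
    alpha = 0 recovers the zero-shot model, alpha = 1 the fine-tuned one.
    """
    assert zero_shot.keys() == fine_tuned.keys(), "models must share parameters"
    return {name: (1 - alpha) * zero_shot[name] + alpha * fine_tuned[name]
            for name in zero_shot}

# Toy example with scalar "weights"; a real model would hold tensors here.
zs = {"layer.weight": 1.0, "layer.bias": 0.0}   # hypothetical zero-shot weights
ft = {"layer.weight": 3.0, "layer.bias": 1.0}   # hypothetical fine-tuned weights
mid = interpolate_weights(zs, ft, 0.5)
print(mid)  # {'layer.weight': 2.0, 'layer.bias': 0.5}
```

Sweeping alpha between 0 and 1 traces out a family of models that can trade off in-distribution accuracy against robustness under shift.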
We conclude with an overview of ongoing work to improve pre-training datasets: LAION-5B, the largest public image-text dataset, and initial experiments to increase the robustness induced by pre-training data (DataComp).
Talk title: A data-centric view on reliable generalization: From ImageNet to LAION-5B