Bias Variance Tradeoff

Published: 17 December 2018
on channel: Data Talks

In this video, we learn about the second most important tradeoff in Machine Learning: the bias-variance tradeoff. Although we are simply rephrasing the results from the previous video, there are two main reasons for doing so:

1. This is the common way of looking at the approximation-generalization tradeoff, even though in my opinion it should be the other way around
2. The more ways you can independently verify a conclusion (a classic probabilist/data-science way of thinking), the stronger you should feel about it

Link to my notes on Introduction to Data Science: https://github.com/knathanieltucker/d...

Try answering these comprehension questions to drill in the concepts covered in this video:

1. Where does the randomness come from when training an ML algorithm (say, linear regression)?
2. Do we ever really know the bias of a model if we only have a sample from the population?
3. Does bias depend on your data sample?
4. Why does bias² + variance (plus irreducible noise) equal the expected test error? (See the simulation sketch after this list.)
5. When you have a lot of data points, are low-bias models better than high-bias ones?
6. Why does variance always seem to work against us?
7. What is the lowest bias model?
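
To make questions 1 and 4 concrete, here is a minimal simulation sketch (not from the video or the linked notes) of the decomposition: expected test error = bias² + variance + irreducible noise. It assumes a true function sin(2πx) with Gaussian noise and polynomial models fit with numpy.polyfit; all names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)   # the "true" function, known only in simulation
noise_sd = 0.3                        # irreducible noise level
x_test = np.linspace(0, 1, 50)        # fixed test inputs

def simulate(degree, n_train=30, n_trials=2000):
    """Fit a polynomial of the given degree on many resampled training
    sets; return empirical (bias^2, variance, expected test error)."""
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        # Question 1: the randomness enters through the sampled training data.
        x = rng.uniform(0, 1, n_train)
        y = f(x) + rng.normal(0, noise_sd, n_train)
        preds[t] = np.polyval(np.polyfit(x, y, degree), x_test)
    bias_sq = np.mean((preds.mean(axis=0) - f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    # Expected squared error against fresh noisy targets:
    test_err = np.mean((preds - f(x_test)) ** 2) + noise_sd ** 2
    return bias_sq, variance, test_err

for deg in (1, 7):   # high-bias/low-variance vs. low-bias/high-variance
    b2, var, err = simulate(deg)
    print(f"degree {deg}: bias^2={b2:.3f}  variance={var:.3f}  "
          f"sum+noise={b2 + var + noise_sd ** 2:.3f}  test error={err:.3f}")

Note that we can only compute the bias here because the simulation knows the true function; with a single real-world sample it can only be estimated, which is exactly what question 2 is getting at.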

