In this video we learn the gradient descent method, a general-purpose approach for problems that cannot be solved explicitly in closed form. In gradient descent, we calculate the gradient of the loss function with respect to the weights. Then we adjust the weights slightly in the direction opposite the gradient to decrease the loss. We repeat this process until reaching a local minimum. Gradient descent is controlled by the learning rate, which is another hyperparameter. As an example, we discuss the least mean squares update rule for linear regression.
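The loop described above can be sketched in a few lines of Python. This is a minimal batch version of gradient descent with the mean-squared-error loss for a one-dimensional linear model; the toy data, learning rate, and step count are illustrative choices, not values from the video.

```python
import numpy as np

def gradient_descent(x, y, lr=0.05, steps=500):
    """Fit y ≈ w*x + b by repeatedly stepping against the MSE gradient."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        y_hat = w * x + b
        # Gradients of the mean squared error with respect to w and b
        grad_w = (2.0 / n) * np.dot(y_hat - y, x)
        grad_b = (2.0 / n) * np.sum(y_hat - y)
        # Slightly adapt the weights against the gradient (scaled by lr)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from the line y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
w, b = gradient_descent(x, y)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The classic least mean squares (Widrow–Hoff) rule applies the same update per sample rather than over the whole batch; the batch form above is equivalent in the limit of a full pass and easier to read.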
Video "Gradient Descent" uploaded by Computational Thinking, 16 November 2022.