Author Message
skim144
Posts: 63
Posted 23:46 Feb 08, 2016 |

2pts: Implement gradient as per lecture/Ng videos.

3pts: Implement gradient_descent as per lecture/Ng videos. It should return a vector of trained parameters. It should take in at least a numpy array for the data, num_iter (number of iterations), and an alpha (learning rate).

 

Can someone clarify the difference between these two steps?

vsluong4
Posts: 87
Posted 23:54 Feb 08, 2016 |

Gradient is the adjustment made to a single theta value to move it closer to the minimum; the formula is somewhere in the notes.

 

Gradient descent is the repeated application of that update step, over all the theta values, to find the trained parameters.
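To make the split concrete, here's a minimal numpy sketch of how the two pieces could fit together for linear regression. The function names and signatures are my own guesses based on the assignment description above, not the prof's exact spec, so adapt as needed:

```python
import numpy as np

def gradient(theta, X, y):
    """Compute the gradient of the mean-squared-error cost for linear
    regression: one partial derivative per theta value.
    (Hypothetical signature -- check the assignment's exact requirements.)
    """
    m = len(y)                       # number of training examples
    predictions = X @ theta          # hypothesis h(x) = X * theta
    return (X.T @ (predictions - y)) / m

def gradient_descent(X, y, num_iter=1000, alpha=0.01):
    """Repeatedly apply the gradient update to train the parameters.
    Returns the vector of trained theta values.
    """
    theta = np.zeros(X.shape[1])     # start from all-zero parameters
    for _ in range(num_iter):
        theta = theta - alpha * gradient(theta, X, y)
    return theta
```

So `gradient` is the 2pt piece (one update's worth of math) and `gradient_descent` is the 3pt piece (the loop that calls it `num_iter` times).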

skim144
Posts: 63
Posted 23:57 Feb 08, 2016 |

Oh, I see how the prof wants us to divide these steps now. Thanks!

vsluong4 wrote:

Gradient is the adjustment made to a single theta value to make it closer to the minimum, the formula is somewhere in the notes

 

 

Last edited by skim144 at 00:10 Feb 09, 2016.
ahnman341
Posts: 22
Posted 00:19 Feb 09, 2016 |

These are my class notes from Sunday.

Attachments:
skim144
Posts: 63
Posted 00:27 Feb 09, 2016 |

Thanks, ahnman341!

ahnman341 wrote:

 

These are my class notes from Sunday.