Author | Message |
---|---|
lmann2
Posts: 156
|
Posted 11:25 Feb 07, 2016 |
Can you tell the difference between these steps? What is the vector that you're talking about? To me, scaling features means converting all the values to a range between -1 and 1 to normalize the data, which you can do by applying a function to each column in a single step (i.e., (value - avg) / range). 2pts: Implement scale_features as per lecture; it should take one training example and return a vector of scaled and mean-normalized values. 2pts: Replace all values in the dataframe with scaled and mean-normalized values. |
khsu
Posts: 30
|
Posted 11:32 Feb 07, 2016 |
The scale_features function takes in a training example (a row from the dataset) and returns the normalized values for that row as a vector. E.g., for input [3, 100, 351, 331] the output would be something like [0.3, 0.71, 0.5, 0.24]. "Replace all values in the dataframe" means running scale_features on each row of the dataframe and setting the values of those rows to the vector returned by your scale_features function. You can include the latter in the scale_features function as well. The scale_features function is meant to produce the normalized vector for ONE training example, as its input indicates. I imagine we will need it again in the future.
P.S. Of course, it doesn't necessarily have to be done row by row; you can also replace the values column by column, as long as that gets the job done. But "one training example" is a row from the dataframe. Last edited by khsu at
11:36 Feb 07, 2016.
|
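[Editor's note] A minimal sketch of the scaling described above, assuming NumPy and mean normalization with (value - mean) / range as discussed in the thread. The data, and the idea of passing precomputed per-column means and ranges as arguments, are hypothetical illustrations, not the assignment's required signature.

```python
import numpy as np

def scale_features(example, col_means, col_ranges):
    # Scale and mean-normalize ONE training example (one row).
    # col_means and col_ranges are computed once over the whole dataset,
    # so every row is normalized against the same statistics.
    return (np.asarray(example, dtype=float) - col_means) / col_ranges

# Hypothetical data: 3 training examples, 4 features
data = np.array([[3.0, 100.0, 351.0, 331.0],
                 [1.0,  80.0, 200.0, 400.0],
                 [5.0, 120.0, 500.0, 250.0]])

means = data.mean(axis=0)                      # per-column mean
ranges = data.max(axis=0) - data.min(axis=0)   # per-column range (max - min)

# "Replace all values": run scale_features on every row and
# rebuild the dataset from the returned vectors
scaled = np.array([scale_features(row, means, ranges) for row in data])
```

After this, every column of `scaled` has mean 0 and all values fall within [-1, 1], which is what the grading step appears to ask for.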
lmann2
Posts: 156
|
Posted 11:39 Feb 07, 2016 |
One more: We aren't actually passing any values (just writing functions) until this last step listed here, correct? 2pts: Implement cost_function as per lecture/Ng videos: it takes the above numpy matrix and returns a cost value. 2pts: Implement gradient as per lecture/Ng videos. 3pts: Implement gradient_descent as per lecture/Ng videos. It should return a vector of trained parameters. It should take in a numpy array, num_iter (number of iterations), and an alpha (learning rate). 2pts: Run gradient_descent. Calculate the cost function values on the normalized data before and after; print these values. |
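[Editor's note] The steps listed above can be sketched roughly as follows, assuming standard linear regression with a mean-squared-error cost as in the Ng videos. The tiny dataset at the bottom is a hypothetical stand-in for the assignment's normalized matrix.

```python
import numpy as np

def cost_function(X, y, theta):
    # Mean squared error cost: J(theta) = 1/(2m) * sum((X @ theta - y)^2)
    err = X @ theta - y
    return (err @ err) / (2 * len(y))

def gradient(X, y, theta):
    # Partial derivatives of J: (1/m) * X^T (X @ theta - y)
    return X.T @ (X @ theta - y) / len(y)

def gradient_descent(X, y, num_iter, alpha):
    # Start at zeros and take num_iter steps of size alpha downhill
    theta = np.zeros(X.shape[1])
    for _ in range(num_iter):
        theta = theta - alpha * gradient(X, y, theta)
    return theta

# Hypothetical tiny dataset (first column is the bias/intercept term);
# the real assignment would pass in the normalized numpy matrix instead.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

cost_before = cost_function(X, y, np.zeros(2))
theta = gradient_descent(X, y, num_iter=1000, alpha=0.1)
cost_after = cost_function(X, y, theta)
```

Printing `cost_before` and `cost_after` mirrors the final 2-point step: the cost should drop sharply once gradient descent has run.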
vsluong4
Posts: 87
|
Posted 21:09 Feb 07, 2016 |
When completed, the actual calls to all of your functions only happen in one of the last few steps, but it wouldn't hurt to pass in some test values to make sure each function at least kind of works before proceeding. |
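[Editor's note] One way to follow this suggestion: feed a function a tiny hand-checkable input whose answer you already know. The data and cost function here are hypothetical, sketched for a mean-squared-error cost as discussed earlier in the thread.

```python
import numpy as np

# Tiny hand-checkable dataset: y = 2*x exactly, with a bias column of ones
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

def cost_function(X, y, theta):
    # Mean squared error: J(theta) = 1/(2m) * sum((X @ theta - y)^2)
    err = X @ theta - y
    return (err @ err) / (2 * len(y))

# At the true parameters (intercept 0, slope 2) the cost should be exactly 0,
# and any other theta should cost more -- a quick sanity check before
# wiring everything together.
assert cost_function(X, y, np.array([0.0, 2.0])) == 0.0
assert cost_function(X, y, np.zeros(2)) > 0.0
```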