306264301 | Posts: 28 | Posted 09:31 Feb 20, 2020

Hi, I'm working on part (d). I already computed the accuracy after voting, but I don't get the part where I have to get the AUC. I know that in order to do that I have to calculate y_predict_probability. I did y_predict_prob = my_DecisionTree.predict_proba(X_test) inside the for loop and appended every result into a list. I got a list of binary arrays, and I don't know what to do with that list.
mpourhoma | Posts: 39 | Posted 10:32 Feb 20, 2020

Good question! You can use that formula for part (c). But for the bagging in part (d), I want you to be creative and find a way to define some kind of probability-like metric based on the base learners' predictions. For example, if you have 5 base learners and 2 of them say YES while 3 say NO, then we may say the (pseudo) probability of YES is 2/5. You can calculate it for all testing samples and later use it as the probability for plotting the ROC curve.
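A minimal sketch of the pseudo-probability idea above, assuming scikit-learn decision trees as base learners, binary 0/1 labels, and a synthetic dataset (the dataset, the number of learners, and all variable names here are illustrative, not the assignment's actual setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Illustrative data in place of the assignment's dataset
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n_learners = 5
rng = np.random.default_rng(0)
votes = np.zeros(len(X_test))

for _ in range(n_learners):
    # Bootstrap sample of the training set (bagging)
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
    # Each hard prediction is 0 or 1, so summing counts the YES votes
    votes += tree.predict(X_test)

# Pseudo-probability of YES = fraction of base learners voting YES
pseudo_prob = votes / n_learners
print("AUC:", roc_auc_score(y_test, pseudo_prob))
```

The key point is that `pseudo_prob` is a score in [0, 1] for every test sample, which is all that `roc_curve` / `roc_auc_score` need; it does not have to come from `predict_proba`.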