Question 1. Explore Model-Based Feature Importance
Throughout this question, you may only use Python. For each sub-question, provide commentary (if
needed) along with screenshots of the code used. Please also provide a copy of the code in your solutions.py file. For fitting models, always use a random seed (or random state) of 4 for reproducibility.
(a) Generate a dataset of two classes using sklearn.datasets.make_classification. It should have 1000 observations and 20 features. Set 5 of those features to be informative (important), and the rest as redundant. Be sure to set the shuffle parameter to False, so that the informative features are listed first. Normalize your data using sklearn.preprocessing.StandardScaler. Then, fit a decision tree (using entropy as the splitting criterion) to a shuffled version of the data (see Note 1 at the end of this question) using sklearn.tree.DecisionTreeClassifier, and, using its feature_importances_ attribute, report how many of the actually important features are found among the top 5 features ranked by the decision tree. Plot a histogram with the x-axis showing the features ranked in decreasing order of importance, and the y-axis showing the feature importance score. Use a random seed of 0 when generating the data for reproducibility. Use a random seed of 0 when shuffling the data; you can use shuffled_idxs = np.random.default_rng(seed=0).permutation(X.shape[1]).
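For orientation, a minimal sketch of the part (a) pipeline might look like the following (variable names such as X_shuffled and top5 are illustrative, not required):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    # 1000 observations, 20 features; first 5 informative, rest redundant (shuffle=False).
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                               n_redundant=15, shuffle=False, random_state=0)
    X = StandardScaler().fit_transform(X)

    # Shuffle the column order so the informative features are no longer listed first.
    shuffled_idxs = np.random.default_rng(seed=0).permutation(X.shape[1])
    X_shuffled = X[:, shuffled_idxs]

    # Entropy-based tree with random_state=4 for reproducibility.
    tree = DecisionTreeClassifier(criterion="entropy", random_state=4).fit(X_shuffled, y)

    # Map the top-5 importance ranks back to the original column indices; columns 0-4 are informative.
    top5 = shuffled_idxs[np.argsort(tree.feature_importances_)[::-1][:5]]
    print(sum(int(idx < 5) for idx in top5))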
(b) Provide a detailed explanation of how the feature importances (a.k.a. Gini importance) in the previous question are computed; use formulas to explain the exact calculation. Further, answer the
following:
1. What feature importance score is assigned to a feature that is not used for any splits of the tree?
Why?
2. What does a feature importance of 0.15 mean?
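For reference, one standard formulation of impurity-based importance (mean decrease in impurity), which your written explanation can expand on: with N_t the number of samples reaching node t, H the impurity measure, and v(t) the feature split on at node t,

    \mathrm{Imp}(j) \;\propto\; \sum_{t \,:\, v(t) = j} \frac{N_t}{N}\left( H(t) - \frac{N_{t_L}}{N_t} H(t_L) - \frac{N_{t_R}}{N_t} H(t_R) \right),

normalized so that the importances sum to 1 over all features.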
(c) In order to obtain a more accurate picture of how good decision trees are at finding important
features, we will repeat the experiment in part (a) a large number of times. Repeat the experiment
a total of 1000 times. In the i-th experiment, use a random seed of i when creating the data set,
where i = 1, 2, . . . , 1000. For each trial, record how many of the actually important features are
identified. Provide a histogram of this metric over the 1000 trials. What do you think about the
ability of decision trees to pick out the top features? Report the average number of good features
recovered over the 1000 trials.
(d) Repeat part (c), but now use logistic regression with no penalty. Do this once with and once without
scaling the feature matrix. As a feature importance metric, use the absolute value of the coefficient
of that feature. Plot a histogram as before and report the average number of features recovered
over the 1000 trials. Compare the scaled and non-scaled versions. How does logistic regression
compare to decision trees?
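Continuing from the part (a) sketch above, one trial of the logistic-regression variant might look like this (penalty=None requires a recent scikit-learn; older versions use penalty='none'):

    from sklearn.linear_model import LogisticRegression

    # Unpenalized logistic regression; |coefficient| serves as the importance score.
    logreg = LogisticRegression(penalty=None, max_iter=1000, random_state=4).fit(X_shuffled, y)
    top5_lr = shuffled_idxs[np.argsort(np.abs(logreg.coef_).ravel())[::-1][:5]]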
(e) Does scaling features affect the result for decision trees? Explain.
(f) We now want to assess how often the two models (decision trees and logistic regression with scaling) identify the same features as being important. Using the set-up of part (c), for each trial, record the number of overlaps between the top-5 ranked features of the two models. Plot a
histogram of the number of overlaps over all trials. For example, if on a particular trial, DT has
[1, 2, 3, 4, 5] in its top-5, and Logistic regression has [1, 2, 6, 7, 8], the number of overlaps for this trial
is 2.
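Counting the overlap is simply a set intersection; for the example above:

    top5_dt = {1, 2, 3, 4, 5}
    top5_lr = {1, 2, 6, 7, 8}
    n_overlap = len(top5_dt & top5_lr)   # 2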
(g) The approaches considered so far are called "model-based" feature importance methods, since they define importance with respect to a particular algorithm/model being used. Discuss some potential disadvantages of using a model-based approach if your goal is to uncover truly important features, referring to the previous exercises for evidence. For example, suppose that you are studying a rare genetic disease and that the 20 features represent specific genetic features, only 5 of which are truly associated with the disease. Further, discuss the effect of the number of redundant features used when creating the data set.

Note 1: The reason we do not shuffle the data when creating it is that we want to be able to know which of the features are the most important (the first 5). We do not want to give the algorithm the ordered features, as this may inflate the algorithm's ability to find important features; it may just break ties by looking at which features come first.
Question 2. Greedy Feature Selection
We now consider a different approach to feature selection known as backward selection. In backward
selection, we:
1. start with all features in the model
2. at each round, we remove one feature from the model based on the drop in the value of a certain metric; we eliminate the feature whose removal corresponds to the smallest drop in the metric.
3. we repeat step 2 until there are no features left.
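A minimal sketch of one reasonable reading of these steps, using an unpenalized logistic regression and |coefficient| as the metric (as in Q2(b) below; the names and the stopping size are illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def backward_elimination(X, y, n_keep=5, seed=4):
        # Refit on the remaining features each round and drop the feature
        # with the smallest |coefficient| until n_keep features are left.
        remaining = list(range(X.shape[1]))
        while len(remaining) > n_keep:
            model = LogisticRegression(penalty=None, max_iter=1000,
                                       random_state=seed).fit(X[:, remaining], y)
            remaining.pop(int(np.argmin(np.abs(model.coef_).ravel())))
        return remaining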
(a) Why do you think this is referred to as a greedy feature selection algorithm? What do you think
are some of the pitfalls of greedy algorithms in this context?
(b) Using the same set-up as in Question 1(a), write code implementing the backward elimination algorithm. Use a logistic regression model with no penalty, and the same metric as in Question
1 part (d). Be sure to generate the data without shuffling but then to shuffle the data before fitting
the model. Report the remaining features at round 15 (that is, when only 5 features are left). How
many of these are actually important features?
(c) Repeat part (b) for 1000 trials (similar to what is done in Q1(c)). Plot a histogram of the number of
important features recovered, and report the average number of recovered features.
(d) Another approach is called best subset selection. This model generates all possible subsets, trains
a model on each subset, evaluates the performance and returns the subset with the highest performance. For example, at the t-th round, we consider all subsets with t features. How does this
algorithm compare to backward selection? Will it always outperform backward elimination? What
are some disadvantages of this approach?
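A hedged sketch of the enumeration at a single subset size, scoring each candidate by training accuracy (the scoring choice is an assumption; names are illustrative):

    from itertools import combinations
    from sklearn.linear_model import LogisticRegression

    def best_subset(X, y, subset_size, seed=4):
        # Fit a model on every subset of the given size and keep the best-scoring one.
        best_score, best_feats = -1.0, None
        for feats in combinations(range(X.shape[1]), subset_size):
            cols = list(feats)
            model = LogisticRegression(penalty=None, max_iter=1000,
                                       random_state=seed).fit(X[:, cols], y)
            score = model.score(X[:, cols], y)
            if score > best_score:
                best_score, best_feats = score, feats
        return best_feats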
(e) Implement best subset selection in code. Repeat part (c) using your best subset implementation.
For computational reasons, set all parameters as in Q1(a), but use only 7 features, 3 of which should be informative and the rest redundant. Plot a histogram as before and
report the average number of recoveries. Comment on your results.
(f) An alternative approach to feature importance is known as the Permutation Feature Importance
score, implemented in sklearn.inspection.permutation_importance. Read the documentation and provide a detailed explanation of how permutation importance works. Compare it
to the techniques studied so far in this homework, and explain why we refer to this as a model-independent metric. Do you think it is more or less fair to compare logistic regression and decision
trees using this metric? Finally, using the sklearn implementation, re-do part Q2(c) using this new
feature importance metric. Similar to before, use 20 features, with 5 to be set as informative and
the rest as redundant.
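For orientation, a minimal usage sketch of the sklearn implementation, assuming a fitted model and the shuffled data from the Q1(a) sketch earlier (names illustrative):

    from sklearn.inspection import permutation_importance

    # Shuffle each feature column in turn and record the drop in the model's score;
    # the mean drop over n_repeats shuffles is that feature's importance.
    result = permutation_importance(tree, X_shuffled, y, n_repeats=10, random_state=4)
    top5_pi = shuffled_idxs[np.argsort(result.importances_mean)[::-1][:5]]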