permutation_importance(estimator, X, y, *, scoring=None, n_repeats=5, n_jobs=None, random_state=None, sample_weight=None, max_samples=1.0)

Permutation importance for feature evaluation.

The estimator is required to be a fitted estimator. X can be the data set used to train the estimator or a hold-out set. The permutation importance of a feature is calculated as follows. First, a baseline metric, defined by scoring, is evaluated on a (potentially different) dataset defined by X. Next, a feature column from the validation set is permuted and the metric is evaluated again. The permutation importance is defined to be the difference between the baseline metric and the metric from permuting the feature column.

Parameters:

estimator : object
    An estimator that has already been fitted and is compatible with scorer.

X : ndarray or DataFrame, shape (n_samples, n_features)
    Data on which permutation importance will be computed.

y : array-like or None, shape (n_samples,) or (n_samples, n_classes)
    Targets for supervised or None for unsupervised.

scoring : str, callable, list, tuple, or dict, default=None
    If scoring represents a single score, one can use:
    - a single string (see The scoring parameter: defining model evaluation rules);
    - a callable (see Defining your scoring strategy from metric functions) that returns a single value.
    If scoring represents multiple scores, one can use:
    - a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
    - a dictionary with metric names as keys and callables as values.
    Passing multiple scores to scoring is more efficient than calling permutation_importance for each of the scores, as it reuses predictions.
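The procedure described above can be sketched with a minimal example. This is an illustration, not part of the original page: the dataset (`load_diabetes`), the `Ridge` estimator, and the train/validation split are assumptions chosen for brevity; only `permutation_importance` and its parameters come from the documentation above.

```python
from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# X can be the training set or, as here, a hold-out set.
X, y = load_diabetes(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# The estimator must already be fitted before it is passed in.
model = Ridge().fit(X_train, y_train)

# Each feature column of X_val is shuffled n_repeats times; the importance
# is the baseline score minus the score on the permuted data.
result = permutation_importance(
    model, X_val, y_val, scoring="r2", n_repeats=5, random_state=0
)

# One mean/std importance per feature.
print(result.importances_mean)
print(result.importances_std)
```

Because the importance is a difference of scores, a value near zero means shuffling that column barely changed the model's score on the chosen metric.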