interpret_community.shap.tree_explainer module
Defines the TreeExplainer for returning explanations for tree-based models.
class interpret_community.shap.tree_explainer.TreeExplainer(model, explain_subset=None, features=None, classes=None, shap_values_output=<ShapValuesOutput.DEFAULT: 'default'>, transformations=None, allow_all_transformations=False, **kwargs)

Bases: interpret_community.common.structured_model_explainer.PureStructuredModelExplainer
available_explanations = ['global', 'local']
explain_global(evaluation_examples, sampling_policy=None, include_local=True, batch_size=100)

Explain the model globally by aggregating local explanations to global.
Parameters:
- evaluation_examples (numpy.array or pandas.DataFrame or scipy.sparse.csr_matrix) – A matrix of feature vector examples (# examples x # features) on which to explain the model's output.
- sampling_policy (interpret_community.common.policy.SamplingPolicy) – Optional policy for sampling the evaluation examples. See the documentation on SamplingPolicy for more information.
- include_local (bool) – Include the local explanations in the returned global explanation. If include_local is False, the local explanations are streamed and aggregated to global instead of being returned.
- batch_size (int) – If include_local is False, specifies the batch size for aggregating local explanations to global.

Returns: A model explanation object. It is guaranteed to be a GlobalExplanation, which also has the properties of LocalExplanation and ExpectedValuesMixin. If the model is a classifier, it will also have the properties of PerClassMixin.

Return type: DynamicGlobalExplanation
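The aggregation that explain_global performs over local explanations can be sketched in plain numpy. The function name and the mean-of-absolute-values rule below are illustrative assumptions for intuition, not the package's internal implementation:

```python
import numpy as np

def aggregate_local_to_global(local_importance_values):
    """Aggregate per-example feature importances (# examples x # features)
    into one global importance per feature by averaging absolute values."""
    return np.mean(np.abs(local_importance_values), axis=0)

# Three examples, two features: feature 1 dominates on average.
local_values = np.array([[0.1, -0.9],
                         [-0.2, 0.8],
                         [0.3, -0.7]])
global_importance = aggregate_local_to_global(local_values)
print(global_importance)  # [0.2 0.8]
```

Averaging absolute values keeps positive and negative local contributions from cancelling each other out in the global ranking.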
explain_local(evaluation_examples)

Explain the model by using shap's tree explainer.

Parameters: evaluation_examples (DatasetWrapper) – A matrix of feature vector examples (# examples x # features) on which to explain the model's output.

Returns: A model explanation object. It is guaranteed to be a LocalExplanation, which also has the properties of ExpectedValuesMixin. If the model is a classifier, it will also have the properties of ClassesMixin.

Return type: DynamicLocalExplanation
explainer_type = 'specific'

The TreeExplainer for returning explanations for tree-based models.
Parameters:
- model (lightgbm, xgboost or scikit-learn tree model) – The tree model to explain.
- explain_subset (list[int]) – A list of feature indices. If specified, only selects a subset of the features in the evaluation dataset for explanation. The subset can be the top-k features from the model summary.
- features (list[str]) – A list of feature names.
- classes (list[str]) – Class names as a list of strings. The order of the class names should match that of the model output. Only required when explaining a classifier.
- shap_values_output (interpret_community.common.constants.ShapValuesOutput) – The type of the output when using TreeExplainer. Currently only the types 'default' and 'probability' are supported. If 'probability' is specified, the raw log-odds values from the TreeExplainer are approximately scaled to probabilities.
- transformations (sklearn.compose.ColumnTransformer or list[tuple]) – sklearn.compose.ColumnTransformer or a list of tuples describing the column name and transformer. When transformations are provided, explanations are of the features before the transformation. The format for a list of transformations is the same as the one here: https://github.com/scikit-learn-contrib/sklearn-pandas.
  If you are using a transformation that is not in the list of sklearn.preprocessing transformations supported by the interpret-community package, then this parameter cannot take a list of more than one column as input for the transformation. You can use the following sklearn.preprocessing transformations with a list of columns, since these are already one-to-many or one-to-one: Binarizer, KBinsDiscretizer, KernelCenterer, LabelEncoder, MaxAbsScaler, MinMaxScaler, Normalizer, OneHotEncoder, OrdinalEncoder, PowerTransformer, QuantileTransformer, RobustScaler, StandardScaler.
  Examples of transformations that work:

      [
          (["col1", "col2"], sklearn_one_hot_encoder),
          (["col3"], None)  # col3 passes as is
      ]

      [
          (["col1"], my_own_transformer),
          (["col2"], my_own_transformer),
      ]

  An example of a transformation that would raise an error, since it cannot be interpreted as one-to-many:

      [
          (["col1", "col2"], my_own_transformer)
      ]

  The last example would not work, since the interpret-community package can't determine whether my_own_transformer gives a many-to-many or one-to-many mapping when taking a sequence of columns.
- allow_all_transformations (bool) – Allow many-to-many and many-to-one transformations.
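The list-of-tuples format above can be built with standard scikit-learn transformers. A sketch, with made-up column names, of a transformations list that follows the rules described above:

```python
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Each tuple pairs a list of column names with a transformer (or None to pass
# the column through unchanged). OneHotEncoder and StandardScaler are in the
# supported one-to-many / one-to-one list, so multi-column lists are allowed.
transformations = [
    (["col1", "col2"], OneHotEncoder(handle_unknown="ignore")),
    (["col3"], StandardScaler()),
    (["col4"], None),  # col4 passes as is
]
print([cols for cols, _ in transformations])
```

A custom transformer that is not in the supported list would have to appear in its own tuple with a single column, unless allow_all_transformations is set.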
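For intuition about the 'probability' option of shap_values_output, a minimal sketch of mapping raw log-odds scores into probabilities with a logistic transform; the actual scaling performed by the package is approximate and model-dependent, so this is only illustrative:

```python
import numpy as np

def log_odds_to_probability(log_odds):
    # Logistic (sigmoid) transform: maps raw log-odds onto the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-np.asarray(log_odds, dtype=float)))

# A log-odds score of 0.0 corresponds to a probability of 0.5.
probs = log_odds_to_probability([0.0, 2.0, -2.0])
print(probs)
```

With the 'default' output, shap values for tree classifiers stay on the raw log-odds scale of the model's margin output.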