interpret_community.adapter.explanation_adapter module

Defines an adapter for creating an interpret-community style explanation from other frameworks.

class interpret_community.adapter.explanation_adapter.ExplanationAdapter(features=None, classification=False, method='Adapter')

Bases: object

An adapter for creating an interpret-community explanation from local importance values.

Parameters:
  • features (list[str]) – A list of feature names.
  • classification (bool) – True for a classification explanation, False for a regression explanation.
  • method (str) – The explanation method used to explain the model (e.g., SHAP, LIME).
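For example, a minimal sketch of constructing an adapter; the feature names and method string below are illustrative, not part of the library:

    from interpret_community.adapter.explanation_adapter import ExplanationAdapter

    # Hypothetical feature names for a three-feature regression model.
    features = ['age', 'income', 'tenure']

    # Adapter for regression importance values produced by, e.g., SHAP.
    adapter = ExplanationAdapter(features=features,
                                 classification=False,
                                 method='SHAP')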
create_global(local_importance_values, evaluation_examples=None, expected_values=None, include_local=True, batch_size=100)

Create a global explanation from the list of local feature importance values.

Parameters:
  • local_importance_values (numpy.array or scipy.sparse.csr_matrix or list[scipy.sparse.csr_matrix]) – The local feature importance values computed for the evaluation examples.
  • evaluation_examples (numpy.array or pandas.DataFrame or scipy.sparse.csr_matrix) – A matrix of feature vector examples (# examples x # features) on which to explain the model’s output.
  • expected_values (numpy.array) – The expected values of the model.
  • include_local (bool) – Whether to include the local explanations in the returned global explanation. If include_local is False, the local explanations are streamed in batches and aggregated into the global explanation.
  • batch_size (int) – The number of local explanations to aggregate per batch when include_local is False.
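A minimal sketch of creating a global explanation, continuing from the constructor example above. The importance values are assumed here to be a (# examples x # features) numpy array; the data is illustrative and would normally come from the explaining framework:

    import numpy as np

    # Local importance values for two examples and three features,
    # e.g. raw SHAP values (illustrative data, not real output).
    local_importance_values = np.array([[0.2, -0.1, 0.05],
                                        [0.3,  0.0, -0.2]])

    # The evaluation examples the importance values were computed on.
    evaluation_examples = np.array([[34, 52000, 2],
                                    [45, 61000, 7]])

    # 'adapter' is the ExplanationAdapter constructed in the sketch above.
    global_explanation = adapter.create_global(
        local_importance_values,
        evaluation_examples=evaluation_examples,
        include_local=True)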
create_local(local_importance_values, evaluation_examples=None, expected_values=None)

Create a local explanation from the list of local feature importance values.

Parameters:
  • local_importance_values (numpy.array or scipy.sparse.csr_matrix or list[scipy.sparse.csr_matrix]) – The local feature importance values computed for the evaluation examples.
  • evaluation_examples (numpy.array or pandas.DataFrame or scipy.sparse.csr_matrix) – A matrix of feature vector examples (# examples x # features) on which to explain the model’s output.
  • expected_values (numpy.array) – The expected values of the model.
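A minimal sketch, reusing the adapter and arrays from the examples above:

    # Wrap the raw importance values in a local explanation object.
    local_explanation = adapter.create_local(
        local_importance_values,
        evaluation_examples=evaluation_examples)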