trust.explainable

Module Contents

Classes

Explainable

A Trustworthy-AI interface that defines the capabilities of objects (e.g., Models, Graphs) that are Explainable.

class trust.explainable.Explainable

A Trustworthy-AI interface that defines the capabilities of objects (e.g., Models, Graphs) that are Explainable, i.e., they can self-explain certain decisions they have made. For example, an Explainable Model should be able to provide a detailed explanation of a specific decision it made at some specified time or on a given set of inputs in the past.

explain(self, decision=None, constituent=Constituency.ANY, aspect=Aspect.ANY)

Returns an explanation for a decision made by the Model, tailored to who is asking and why.

Parameters:

decision : array-like
    The input data of the decision to be explained.
constituent : Constituency
    The Constituency asking for the explanation.
aspect : Aspect
    The Aspect of the question (e.g., Accountable, Functional, Operational).

Returns:

out : An explanation of the specified decision (e.g., SHAP or LIME feature attributions).
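A minimal sketch of how a model might implement this interface. Only the Explainable class, the explain() signature, and the ANY members of Constituency and Aspect appear in the documentation above; the other enum members, the LinearModel class, and its weight-contribution explanation are illustrative assumptions standing in for a real explainer such as SHAP or LIME.

```python
from enum import Enum

# Stand-in enums: ANY comes from the documented defaults; the other
# members are assumptions added for illustration.
class Constituency(Enum):
    ANY = "any"
    REGULATOR = "regulator"

class Aspect(Enum):
    ANY = "any"
    ACCOUNTABLE = "accountable"
    FUNCTIONAL = "functional"
    OPERATIONAL = "operational"

class Explainable:
    """Interface: implementers override explain()."""
    def explain(self, decision=None, constituent=Constituency.ANY,
                aspect=Aspect.ANY):
        raise NotImplementedError

# Hypothetical self-explaining model: for a linear model, each feature's
# contribution to a decision is simply weight * value, which serves here
# as a stand-in for SHAP/LIME-style attributions.
class LinearModel(Explainable):
    def __init__(self, weights):
        self.weights = weights

    def predict(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))

    def explain(self, decision=None, constituent=Constituency.ANY,
                aspect=Aspect.ANY):
        contributions = [w * xi for w, xi in zip(self.weights, decision)]
        explanation = {"aspect": aspect.name, "contributions": contributions}
        # Tailor detail to who is asking: a regulator also sees the weights.
        if constituent is Constituency.REGULATOR:
            explanation["weights"] = list(self.weights)
        return explanation

model = LinearModel([0.5, -2.0])
print(model.explain(decision=[4.0, 1.0]))
# → {'aspect': 'ANY', 'contributions': [2.0, -2.0]}
```

Passing a constituent and aspect lets the same decision yield differently scoped explanations, which is the point of the two extra parameters in the documented signature.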