How we explain
Explainability is motivated by the lack of transparency in black-box approaches, which undermines trust in and acceptance of AI in general and ML in particular, as well as by rising legal and privacy requirements and the lack of actionable insights.
Expectations from the user perspective
XAI assumes that an explanation is provided to an end user who depends on the decisions, recommendations, or actions produced by an AI system, yet there can be many different kinds of users, such as analysts, business users, sales users, or customers.
How we solve
Our framework accounts for all users across the value chain, providing aggregate (global) or local explanations. Explainability is achieved through a combination of open-source, model-agnostic methods, using techniques such as attention mechanisms and relevance propagation.
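To make the model-agnostic idea concrete, here is a minimal sketch of one such method, permutation feature importance: shuffle one feature at a time and measure how much the model's predictions change. The toy model, feature layout, and function names are illustrative assumptions, not the framework's actual implementation.

```python
import random

def toy_model(row):
    # Illustrative scoring model: relies heavily on feature 0, weakly on feature 1.
    return 3.0 * row[0] + 0.5 * row[1]

def permutation_importance(model, rows, n_features, seed=0):
    """Model-agnostic importance: shuffle each feature column and measure
    how far predictions drift from the unshuffled baseline. A larger drift
    means the model depends more on that feature."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]

    def drift(preds):
        # Mean absolute deviation from the baseline predictions.
        return sum(abs(p - b) for p, b in zip(preds, baseline)) / len(rows)

    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)
        shuffled = [list(r) for r in rows]
        for i, v in enumerate(col):
            shuffled[i][j] = v
        importances.append(drift([model(r) for r in shuffled]))
    return importances

rows = [[float(i), float(i % 3)] for i in range(20)]
imp = permutation_importance(toy_model, rows, 2)
print(imp)  # feature 0 should score much higher than feature 1
```

Because the method only queries the model through its predictions, the same code works for any black box, which is what makes it usable across very different user-facing models.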
Our platform understands the user's placement in the value chain and, based on that, decides on the accuracy-explainability trade-off, which in turn determines the mode of explanations and insights delivered to the user.
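The routing described above can be sketched as a simple mapping from a user's role to an explanation mode. The role names and mode labels below are assumptions for illustration, not the platform's actual taxonomy.

```python
# Hypothetical role-to-explanation-mode routing. Roles and modes are
# illustrative assumptions, not the platform's real configuration.
ROLE_TO_MODE = {
    "analyst": "local",         # per-prediction attributions, full detail
    "business_user": "global",  # aggregate feature importances
    "sales": "global",
    "customer": "narrative",    # plain-language summary of key drivers
}

def explanation_mode(role: str) -> str:
    # Unknown roles fall back to the most human-readable mode.
    return ROLE_TO_MODE.get(role, "narrative")

print(explanation_mode("analyst"))       # -> local
print(explanation_mode("procurement"))   # -> narrative
```

Defaulting unknown roles to the plain-language mode is a conservative choice: a narrative summary is safe to show anyone, while detailed local attributions only help users trained to read them.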