Explainable AI

Read below how Finarb's solutions, powered by Explainable AI, can help you go under the hood of traditional black-box AI/ML models and make your model insights accountable and effective.

Explainable AI

Explainable AI (XAI) is an emerging field in machine learning that aims to reveal how the black-box decisions of AI systems are made, by inspecting and trying to understand the steps and models involved in reaching a decision.

This part presents an overview of some of the most important drivers of XAI research, such as establishing trust, meeting regulatory compliance, detecting bias, and improving AI model generalization and debugging.


This part takes a deep dive into what it really means to explain AI models: existing definitions, the importance of explanation, users' roles in a given application, and possible trade-offs.


This part looks into achieving explainability prior to the modelling stage, reviewing a set of methodologies for better understanding and documenting the datasets used for modelling.

How: Pre-modelling
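Pre-modelling documentation of a dataset can be as simple as recording per-feature summary statistics before any model is trained. The sketch below is an illustration of that idea; the feature name and values are synthetic, not taken from any real dataset.

```python
# Minimal sketch of a pre-modelling "datasheet" entry: basic statistics that
# document one numeric feature before training. Names and data are illustrative.
import statistics

def summarize_column(name, values):
    """Return basic documentation stats for one numeric feature (None = missing)."""
    present = [v for v in values if v is not None]
    return {
        "feature": name,
        "count": len(values),
        "missing": len(values) - len(present),
        "mean": round(statistics.mean(present), 3),
        "stdev": round(statistics.stdev(present), 3),
    }

ages = [34, 45, None, 29, 52, 41]
print(summarize_column("age", ages))
```

In practice such summaries would be generated for every feature and stored alongside the dataset as part of its documentation.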

This part reviews a family of methodologies for achieving explainable modelling, including adopting inherently explainable models, hybrid models, joint prediction and explanation, and explainability through regularization and architectural adjustments.

How: Modelling
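The simplest example of an inherently explainable model is ordinary least squares: each learned coefficient directly states a feature's contribution to the prediction, so no separate explanation step is needed. The data and feature names below are synthetic illustrations.

```python
# A minimal sketch of an inherently explainable model: linear regression,
# where the learned coefficients ARE the explanation (per-feature effect).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # two synthetic features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1]         # known ground-truth weights

# Fit via least squares with an intercept column.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
for name, w in zip(["feature_a", "feature_b", "intercept"], coef):
    print(f"{name}: {w:+.2f}")
```

The trade-off is expressive power: such models are transparent precisely because they restrict the function class they can represent.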

This part presents a novel taxonomy of post-modelling explainability methodologies. This taxonomy is then used as the underlying structure for reviewing the related literature, which helps keep the platform updated with the latest work in XAI.

How: Post-modelling
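One widely used family of post-modelling techniques fits a local surrogate: probe the black box around a single input and fit a simple model to the perturbed predictions. The sketch below illustrates this under assumed names; `black_box` is a stand-in for any opaque model, not a real API.

```python
# Hedged sketch of a post-modelling local surrogate explanation:
# fit a linear model to a black box's outputs near one input, so the
# surrogate's weights approximate the black box's local behaviour.
import numpy as np

def black_box(x):
    # Stand-in for an opaque model: nonlinear in x[0], linear in x[1].
    return np.sin(x[..., 0]) + 2.0 * x[..., 1]

rng = np.random.default_rng(1)
x0 = np.array([0.0, 1.0])                        # instance to explain
perturbed = x0 + 0.01 * rng.normal(size=(500, 2))
preds = black_box(perturbed)

# Local linear surrogate: its weights approximate the gradient at x0.
A = np.c_[perturbed - x0, np.ones(len(perturbed))]
weights, *_ = np.linalg.lstsq(A, preds, rcond=None)
print(weights[:2])   # ≈ [cos(0), 2.0] = [1.0, 2.0]
```

Because the surrogate is fit only on a small neighbourhood, its explanation is valid locally, not for the model as a whole.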

How we explain


Explainability is motivated by the lack of transparency of black-box approaches, which does not foster trust in and acceptance of AI in general and ML in particular. Rising legal and privacy requirements, along with the lack of actionable insights, add to this need.

Expectation from user Perspective

XAI assumes that an explanation is provided to an end user who depends on the decisions, recommendations, or actions produced by an AI system, yet there can be many different kinds of users: analysts, business users, sales teams, or customers.

How we solve

Our framework accounts for all users across the value chain, providing aggregate or local explanations. Explainability is achieved through a combination of open-source, model-agnostic methods using attention mechanisms, relevance propagation, and similar techniques.
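A model-agnostic method needs nothing but a predict function. Permutation importance is a simple example: shuffle one feature at a time and measure how much the error grows. The data and predict function below are synthetic stand-ins, not part of the actual framework.

```python
# Sketch of a model-agnostic explanation: permutation importance works on any
# predict function without inspecting its internals. Data is synthetic.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
y = 4.0 * X[:, 0] + 1.0 * X[:, 1]          # feature 2 is irrelevant

def predict(X):                            # any opaque model would do here
    return 4.0 * X[:, 0] + 1.0 * X[:, 1]

def permutation_importance(predict, X, y):
    base = np.mean((predict(X) - y) ** 2)  # baseline error
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return scores

print(permutation_importance(predict, X, y))
```

Features whose shuffling leaves the error unchanged, like the third column here, score near zero and are revealed as irrelevant to the model.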


Our platform understands the user's placement in the value chain and, based on that, decides on the accuracy-explainability trade-off, which in turn determines the mode of explanation and the insights delivered to the user.
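Routing explanations by user role can be sketched as a simple lookup. The roles, modes, and descriptions below are illustrative assumptions for this sketch, not the platform's actual configuration.

```python
# Hedged sketch of choosing an explanation mode from a user's place in the
# value chain. All roles and mode labels are illustrative assumptions.
EXPLANATION_MODES = {
    "analyst":  {"mode": "local",  "detail": "feature attributions per prediction"},
    "business": {"mode": "global", "detail": "aggregate feature importance"},
    "customer": {"mode": "local",  "detail": "plain-language reason codes"},
}

def explanation_for(role):
    """Pick an explanation mode for a user role, defaulting to a global summary."""
    return EXPLANATION_MODES.get(role, {"mode": "global", "detail": "summary"})

print(explanation_for("analyst")["mode"])   # local
```

A real system would also weigh the accuracy-explainability trade-off per role, e.g. giving analysts access to more faithful but more complex explanations.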

Let's Get Started
Contact Us