
Explainable AI (XAI)


Definition

Explainable AI (XAI) refers to artificial intelligence systems that provide human-interpretable reasons for their outputs, enabling users to understand, trust, and manage AI decisions.

Key Characteristics

  • Transparent, inherently interpretable model designs
  • Local Interpretable Model-agnostic Explanations (LIME)
  • SHAP (SHapley Additive exPlanations) values
  • Support for regulatory compliance (GDPR, EU AI Act)
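The perturbation idea behind model-agnostic methods like LIME can be sketched in a few lines: nudge one feature at a time around an instance and measure how the model's output moves. This is a simplified sensitivity sketch, not the LIME algorithm itself, and the `black_box` model here is hypothetical:

```python
import random

def black_box(x):
    # Hypothetical opaque model: a nonlinear score over three features.
    return 0.6 * x[0] + 0.3 * x[1] ** 2 - 0.1 * x[0] * x[2]

def local_importance(model, instance, n_samples=500, scale=0.1, seed=0):
    """Estimate each feature's local influence by perturbing it around
    the instance and averaging the resulting change in model output
    (a LIME-flavoured sensitivity sketch, not the full algorithm)."""
    rng = random.Random(seed)
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        slopes = []
        for _ in range(n_samples):
            perturbed = list(instance)
            eps = rng.gauss(0, scale)
            if eps == 0:
                continue
            perturbed[i] += eps
            slopes.append((model(perturbed) - base) / eps)
        importances.append(sum(slopes) / len(slopes))
    return importances

if __name__ == "__main__":
    x = [1.0, 2.0, 3.0]
    for name, w in zip(["f0", "f1", "f2"], local_importance(black_box, x)):
        print(f"{name}: {w:+.2f}")
```

The real LIME library additionally samples all features jointly, weights samples by proximity, and fits a sparse linear surrogate model; the sketch above only captures the core perturb-and-observe idea.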

Why It Matters

Critical for high-stakes domains like healthcare and finance, where understanding AI decisions is legally and ethically required. Reduces “black box” concerns.

Common Use Cases

  1. Credit approval systems
  2. Medical diagnosis support
  3. Fraud detection explanations

Examples

  • LIME Python library
  • IBM Watson OpenScale
  • Google’s What-If Tool (shipped with TensorBoard)

FAQs

Q: Does XAI reduce model accuracy?
A: Post-hoc methods such as SHAP and LIME explain an already-trained model without modifying it, so accuracy is unaffected; an accuracy trade-off arises only if you switch to an inherently interpretable model instead.
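The Shapley values underlying SHAP can even be computed exactly when the feature set is tiny: average each feature's marginal contribution over every ordering. A toy sketch (the scoring model and feature values are hypothetical; real SHAP approximates this efficiently):

```python
from itertools import permutations

def shapley_values(model, instance, baseline):
    """Exact Shapley values for a small feature set: average each
    feature's marginal contribution over all orderings. A feature is
    'present' when set to its instance value, 'absent' at the baseline."""
    n = len(instance)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        x = list(baseline)
        prev = model(x)
        for i in order:
            x[i] = instance[i]  # add feature i to the coalition
            cur = model(x)
            phi[i] += cur - prev
            prev = cur
    return [p / len(perms) for p in phi]

def credit_score(x):
    # Hypothetical credit model with an interaction term.
    return 0.5 * x[0] + 0.2 * x[1] + 0.1 * x[0] * x[1]

if __name__ == "__main__":
    sv = shapley_values(credit_score, [1.0, 1.0], [0.0, 0.0])
    print(sv)
```

By construction the values satisfy the efficiency property: they sum exactly to `model(instance) - model(baseline)`, which is what makes them attractive for attributing a prediction to its inputs.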

Q: Is XAI required by law?
A: Not universally, but the EU AI Act requires high-risk AI systems to be transparent enough for deployers to interpret their outputs, with most high-risk obligations applying from 2026.