
Model Drift


Definition

Model drift occurs when a machine learning model’s predictions become less accurate over time as real-world data distributions deviate from the training data.

Key Characteristics

  • Two main types: concept drift (the relationship between the inputs and the target changes) and data drift (the distribution of the input data itself changes)
  • Often caused by shifting user behavior, market trends, or sensor degradation
  • Requires continuous monitoring of input data and predictions in production
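Monitoring for data drift often comes down to comparing a live feature's distribution against the training-time distribution. A minimal sketch using the Population Stability Index (PSI) on synthetic data; the 0.1/0.25 thresholds are a common rule of thumb, not a universal standard:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two 1-D samples of one feature.

    Rule of thumb (an assumption, not a standard): PSI < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions so log(0) never occurs in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative synthetic data: production inputs shifted by one std dev.
rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5000)  # training-time distribution
live_feature = rng.normal(1.0, 1.0, 5000)   # drifted production distribution

assert psi(train_feature, train_feature) < 0.1  # identical data: stable
assert psi(train_feature, live_feature) > 0.25  # shifted data: flag drift
```

In practice the same check runs per feature on a scheduled window of production data, with alerts fired when the index crosses the chosen threshold.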

Why It Matters

Unaddressed drift leads to flawed business decisions. A 2022 Fiddler AI study found that 92% of models degrade within three months of deployment.

Common Use Cases

  1. E-commerce recommendation engines
  2. Fraud detection systems
  3. Predictive maintenance models

Examples

  • Monitoring tools: Evidently AI, Amazon SageMaker Model Monitor
  • Mitigation: Retraining pipelines with fresh data
  • Affected models: COVID-era demand forecasting systems

FAQs

Q: How often should models be checked for drift?
A: For critical systems, daily or weekly; for others, monthly. Use statistical tests such as the Kolmogorov-Smirnov test to compare recent inputs against a reference window.
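The Kolmogorov-Smirnov check mentioned above can be run with SciPy's two-sample test; here is a minimal sketch on synthetic data, where the reference/current windows and the 0.01 significance level are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, 2000)  # feature values at training time
current = rng.normal(0.8, 1.2, 2000)    # recent production values (drifted)

stat, p_value = ks_2samp(reference, current)
# A small p-value rejects "same distribution" -> raise a drift alert.
drift_detected = p_value < 0.01
```

The test is nonparametric, so it needs no assumption about the feature's distribution; it is typically applied feature by feature on each monitoring window.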

Q: Is model drift the same as data quality issues?
A: No. Drift assumes the data is valid but evolving; quality issues involve errors or outliers in the data itself.