2024 Winner: Fair Augmentation of Decision Trees Through Selective Node Retraining

Project Information
Fair Augmentation of Decision Trees Through Selective Node Retraining
Engineering
Artificial Intelligence Explainability Accountability (AIEA) Lab
With modern machine learning models becoming ever more complex and embedded
within society, there is a need for accurate, interpretable, and fair models that users
can trust. Decision trees, being fully interpretable models, fill this role perfectly.
Current research shows that algorithms exist that can train decision trees to be
both accurate and fair. Despite this, decision trees are often trained solely for
accuracy, resulting in biased or discriminatory pre-trained trees. Frequently, the
root of the bias in these pre-trained trees stems from a few select nodes or subtrees.
In this paper, I propose a novel method of selective fair retraining of decision trees,
modifying discriminatory nodes to remove bias while retaining high accuracy. The
experimental results indicate that the proposed tree modification method can result
in fair decision trees with higher accuracy than those trained from scratch.
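The core idea above — that bias often concentrates in a few nodes, which can be selectively modified rather than retraining the whole tree — can be sketched in miniature. The following is an illustrative toy only, not the paper's algorithm: it hand-builds a three-leaf tree, measures the demographic parity gap between two groups, and flips the label of whichever single leaf most reduces that gap subject to a cap on accuracy loss. All data, leaf names, and thresholds are invented for the example.

```python
# Illustrative sketch: selective repair of one biased leaf in a fixed tree.
# The tree, dataset, and relabeling heuristic are invented for this example
# and stand in for the paper's more general selective node retraining.

# Records: (x0, x1, sensitive_attribute, true_label)
DATA = [
    (0.2, 0.5, 0, 1), (0.2, 0.5, 0, 1), (0.2, 0.5, 0, 1), (0.2, 0.5, 0, 0),
    (0.8, 0.8, 0, 1), (0.8, 0.8, 0, 1),
    (0.8, 0.2, 1, 0), (0.8, 0.2, 1, 0), (0.8, 0.2, 1, 1), (0.8, 0.2, 1, 1),
    (0.8, 0.8, 1, 1), (0.8, 0.8, 1, 1),
]

# Leaf predictions, keyed by leaf id (the "nodes" we may modify).
LEAVES = {"A": 1, "B": 0, "C": 1}

def predict(leaves, x0, x1):
    """Route a sample through the fixed tree; return (leaf_id, prediction)."""
    if x0 <= 0.5:
        return "A", leaves["A"]
    if x1 <= 0.5:
        return "B", leaves["B"]
    return "C", leaves["C"]

def evaluate(leaves, data):
    """Return (accuracy, demographic parity difference between groups)."""
    correct = 0
    pos = {0: 0, 1: 0}   # positive predictions per sensitive group
    n = {0: 0, 1: 0}     # group sizes
    for x0, x1, s, y in data:
        _, pred = predict(leaves, x0, x1)
        correct += (pred == y)
        pos[s] += pred
        n[s] += 1
    dp = abs(pos[0] / n[0] - pos[1] / n[1])
    return correct / len(data), dp

def repair(leaves, data, max_acc_drop=0.1):
    """Flip the single leaf label that most reduces the parity gap,
    keeping the accuracy drop within max_acc_drop."""
    base_acc, base_dp = evaluate(leaves, data)
    best, best_key = dict(leaves), (base_dp, 0.0)
    for leaf in leaves:
        trial = dict(leaves)
        trial[leaf] = 1 - trial[leaf]
        acc, dp = evaluate(trial, data)
        drop = base_acc - acc
        if drop <= max_acc_drop and (dp, drop) < best_key:
            best, best_key = trial, (dp, drop)
    return best

if __name__ == "__main__":
    acc, dp = evaluate(LEAVES, DATA)
    repaired = repair(LEAVES, DATA)
    acc2, dp2 = evaluate(repaired, DATA)
    print(f"before: acc={acc:.2f} dp={dp:.2f}")
    print(f"after:  acc={acc2:.2f} dp={dp2:.2f}")
```

In this toy, leaf B sits on a feature correlated with the sensitive attribute: flipping it closes the parity gap entirely at no accuracy cost, while flipping leaf A or C either violates the accuracy budget or leaves the gap unchanged — a small instance of the abstract's claim that bias often traces to a few select nodes.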
1629.pdf
Students
  • Coen Timothy Adler (Crown)
Mentors