Most deepfake detection models focus on a binary question: is this face
real or fake? What they don't account for is how the subject's facial
expression — happiness, anger, surprise — affects that judgment. My
research investigates emotion-wise bias in deepfake detection: whether a
detector's error rate shifts with the emotion being expressed. It is an
underexplored problem with real implications for the fairness of
automated media forensics.
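One simple way to make "emotion-wise bias" concrete is to compare a detector's accuracy across emotion categories and take the gap between the best- and worst-served emotions. This is an illustrative metric of my own choosing, not necessarily the exact measure used in the study; the function name and the max-minus-min gap are assumptions:

```python
import numpy as np

def emotion_wise_bias(labels, preds, emotions):
    """Per-emotion accuracy plus a simple bias score.

    Bias here is the gap between the highest and lowest per-emotion
    accuracy (an illustrative choice, not the paper's exact metric).
    """
    labels, preds, emotions = (np.asarray(x) for x in (labels, preds, emotions))
    per_emotion = {}
    for e in np.unique(emotions):
        mask = emotions == e  # select samples showing this emotion
        per_emotion[str(e)] = float((labels[mask] == preds[mask]).mean())
    accs = per_emotion.values()
    return per_emotion, max(accs) - min(accs)

# Toy example: perfect on "happy" faces, 50% on the others.
per, gap = emotion_wise_bias(
    labels=[1, 0, 1, 0, 1, 0],
    preds=[1, 0, 1, 1, 0, 0],
    emotions=["happy", "happy", "angry", "angry", "surprise", "surprise"],
)
# per → {'angry': 0.5, 'happy': 1.0, 'surprise': 0.5}, gap → 0.5
```

A gap of 0 would mean the detector treats every expression equally; the larger the gap, the more its reliability depends on what the face is doing.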
Working under Prof. Manoranjan Dash (Dean, School of
Computing & Data Sciences, FLAME University), I led a four-member team
through the full research pipeline: data acquisition, preprocessing, model
evaluation, and paper drafting. Using frequency-domain modeling and a
fine-tuned ResNet CNN, we improved baseline detection accuracy from 50%
to 75% and reduced emotion-wise bias by 9% — all while maintaining
strong F1-scores across emotion categories.
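The frequency-domain side can be sketched with a log-magnitude 2D FFT of a grayscale face crop, a common cue for deepfake artifacts since generative upsampling tends to leave periodic peaks in the spectrum. This is a minimal sketch of that preprocessing step, not the project's actual pipeline; the function name and normalization scheme are assumptions:

```python
import numpy as np

def fft_magnitude_features(gray_face, eps=1e-8):
    """Log-magnitude FFT spectrum of a grayscale face crop.

    fftshift centers the zero-frequency component; the log compresses
    the huge dynamic range, and the result is standardized so it can
    feed a CNN such as a fine-tuned ResNet.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray_face))
    log_mag = np.log(np.abs(spectrum) + eps)
    return (log_mag - log_mag.mean()) / (log_mag.std() + eps)

# Usage: a 64x64 crop yields a 64x64 standardized spectral map.
rng = np.random.default_rng(0)
features = fft_magnitude_features(rng.random((64, 64)))
```

In this setup the spectral map would be fed to the classifier either alongside or in place of the raw pixels, letting the network pick up generator artifacts that are hard to see in the spatial domain.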
The work is ongoing. The goal is a detection framework that doesn't just
catch deepfakes, but catches them consistently regardless of what the face
in the video is doing.