Research

Emotion Detection

Interpretable Social Behavior Analysis

While deep learning models offer rich contextual insights, their black-box nature limits interpretability and overlooks user-specific stylistic patterns. In this work, I developed a novel, interpretable handcrafted feature representation to capture users' distinct, domain-specific emotional cues from social media microblogs. Using a genetic algorithm-based approach, I extracted and combined three user-specific interpretable feature groups: Stylistic (S), Sentiment (SE), and Linguistic (L). I then extended this work by fusing the interpretable handcrafted representation with rich, contextual LLM-driven latent deep features. The resulting bi-modal architecture exploits the complementary strengths of these diverse feature groups, improving accuracy, robustness, and interpretability. This is the first work to combine interpretable handcrafted features with LLM-driven contextual deep features for bi-modal emotion detection, laying the groundwork for robust and trustworthy human-machine teaming by capturing behavioral cues with greater precision.
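The bi-modal fusion described above can be sketched as follows. This is a minimal illustration, not the published pipeline: the feature dimensions, the synthetic data, and the logistic-regression classifier are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 200 microblog posts, 4 emotion classes.
n_posts, n_classes = 200, 4
stylistic = rng.random((n_posts, 8))    # S: e.g., punctuation/emoji ratios (illustrative)
sentiment = rng.random((n_posts, 4))    # SE: e.g., polarity scores (illustrative)
linguistic = rng.random((n_posts, 12))  # L: e.g., POS-tag distributions (illustrative)
llm_embed = rng.random((n_posts, 64))   # stand-in for LLM-driven latent deep features
labels = rng.integers(0, n_classes, n_posts)

# Bi-modal fusion: concatenate the interpretable handcrafted representation
# with the contextual latent representation into a single feature vector.
handcrafted = np.hstack([stylistic, sentiment, linguistic])
fused = np.hstack([handcrafted, llm_embed])

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(fused.shape)  # (200, 88): 8 + 4 + 12 handcrafted dims + 64 latent dims
```

Because the handcrafted block keeps its own columns after concatenation, per-feature model coefficients remain attributable to interpretable cues, while the latent block supplies contextual signal.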

Related Publications:

Trustworthy AI


Trust, Fairness, and Explainability in AI

My research investigates trust, fairness, bias, and explainability in AI-driven social media data mining. It highlights how biases in data preparation, processing, and modeling distort AI predictions, leading to unfair outcomes and eroding user trust. To address these challenges, I proposed three mitigation strategies: curating diverse datasets, implementing transparency-focused datasheets, and incorporating qualitative analysis for bias detection. These efforts promote fairness and accountability, ensuring AI systems are more transparent and ethical in social media applications.

I developed a Trustworthy AI (TXAI) framework that integrates affective traits, personality, and social influences for ethical AI decision-making. This framework employs a weighted trust assessment mechanism and emphasizes ethical data practices, robust bias mitigation, and explainability. Collaborating with interdisciplinary teams, I explored governance, regulatory compliance, and transparency across AI workflows. My research improves bias detection through multi-source data analysis and iterative evaluation, producing actionable guidelines for responsible AI deployment. This work supports user-centered AI, aligning AI technologies with societal values to ensure fairness, privacy, and trustworthiness.
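The weighted trust assessment mechanism might be sketched like this. The factor names, scores, and weights below are hypothetical placeholders chosen for illustration, not values or terminology from the TXAI framework itself.

```python
# Hypothetical weighted trust assessment: each factor is scored in [0, 1]
# and combined via normalized weights into an overall trust score.
trust_factors = {"affect": 0.7, "personality": 0.6, "social_influence": 0.8}
weights = {"affect": 0.5, "personality": 0.2, "social_influence": 0.3}

def trust_score(factors, weights):
    """Weighted average of trust factors; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(factors[k] * weights[k] for k in factors) / total

score = trust_score(trust_factors, weights)
print(round(score, 2))  # 0.7*0.5 + 0.6*0.2 + 0.8*0.3 = 0.71
```

Keeping the factors and weights explicit in this way supports explainability: each factor's contribution to the final score can be reported alongside the decision.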

Related Publications:

XAI for Mental Health and Well-being


My research explores Canadians' perceptions of trust in AI for healthcare, using LLM-driven opinion mining to uncover public concerns and trust factors. My ongoing research also investigates how users' emotional, personality, and linguistic attributes, extracted from social media mental health discussions, can be leveraged to identify diverse root causes of mental health issues and to develop an XAI framework that provides personalized, transparent policy recommendations. This work aims to bridge AI and psychology for ethical, explainable, and user-centered mental health support.

Related Publications:

  • A Digital Lighthouse: Exploring Health Concerns and Public Trust Using LLM-Driven Opinion Mining from Canadian Reddit Communities (Under Review, PLOS One, 2024)