Narjès Bahhar is a doctoral researcher at KU Leuven studying fairness and bias in artificial intelligence systems, with a focus on natural language processing. Her work examines responsible AI development methodologies and algorithmic fairness frameworks, including technical approaches for measuring and mitigating bias in language models. On X, she synthesizes developments from major AI conferences, machine learning papers, and ongoing academic discourse in AI ethics. On Substack, she publishes long-form analysis of human-machine collaboration, urban technology integration, and emerging systems such as humanoid robotics, connecting academic research findings to the practical challenges of AI deployment. Her analysis spans blockchain applications, language model capabilities, and automated decision systems through a sociotechnical lens. She contributes to peer discussions of AI safety protocols, fairness metrics, and responsible innovation frameworks, and engages regularly with academic and industry practitioners working on AI governance and ethics.