Psychological vulnerability, human-AI interaction, digital ethics
How do conversational AI systems affect vulnerable users? What patterns of emotional dependency emerge? My clinical work (10,000+ hours of direct patient contact) provides ground truth on psychological edge cases, including manipulation, emotional collapse, and dependency formation, which I now bring to AI safety research.
Behavioral analysis through smartphones and digital traces raises major epistemological and ethical questions. My work interrogates the reductionist assumptions of digital mental health tools and proposes a psychoanalytic reading of the challenges such data pose.
How does technology mediate the therapeutic relationship, body perception, self-representation? I have studied these questions since 2012, particularly through clinical work with cochlear implant recipients and patients with sensory prosthetics.
Full profile: ORCID 0000-0002-9079-5710
2020-2022
COV-CARE International Network
Coordinator. 30 researchers, 5 continents. Tele-psychotherapy during pandemic: human-technology interaction under crisis.
2016-2021
ANR LIGHT4DEAF
Work Package Director (€200k). Psychological impact of sensory prosthetics and technology-mediated perception.
2015-2018
DéPsySurdi
Principal Investigator, Fondation Maladies Rares (€90k). Psychopathology and deafblindness.
2025-2026
Research leave dedicated to AI Safety applications
Psychological risk assessment frameworks for conversational AI. User vulnerability pattern identification in human-AI interactions. Ethical analysis of behavioral data collection and algorithmic mediation.