lumenx is built by behavioral scientists and computational researchers. Our methodology draws on validated frameworks from multiple disciplines. We publish our work and validate our own tools.
lumenx sits at the intersection of behavioral science, computational cognitive science, natural language processing, and psychometrics. Our team trained and published across these fields before building the platform.
Our CEO co-founded the Behavioral AI Institute, a non-profit research organization dedicated to ensuring AI development is informed by deep knowledge of human cognition, behavior, and values. The Institute brings together researchers from Harvard, UCL, Duke, and the University of Exeter. Its open letter calling for behavioral science as a core discipline in AI has gathered over 530 signatories.
behavioralai-institute.org

In 2025, Bpifrance classified lumenx as Deep Tech, a designation awarded after technical review by government innovation experts. It recognizes technologies with a genuine scientific foundation that required significant R&D, as opposed to software built on existing tools. For lumenx, it reflects the combination of proprietary behavioral science ontologies, computational cognitive models, and AI conversation architecture that underpins the platform.
Our team publishes across behavioral science, computational neuroscience, and applied management. We bridge academic research and real-world practice.
235 adults with hearing loss across the UK and US, randomly assigned to one of three methods: AI-conducted behavioral conversation, traditional survey, or human-led interview. Same research question. Same population. Three different approaches.
of participants rated AI conversations as good as or better than a human-led interview; 59% rated them better.
n = 54 post-session comparisons
more participant language than surveys. 852 words per AI conversation vs. 216 words per survey response.
98 AI conversations, 79 surveys
average across comfort, emotional sensitivity, and feeling listened to. Participants disclosed personal experiences freely.
composite of 3 post-session measures
to complete all AI interviews. The equivalent human-led interviews took two weeks for 30 participants.
same population, same questions
Every participant rated their experience immediately after the session. AI scores tracked closely with human-led interviews across all five dimensions.
Behavioral conversations draw out substantially more language than fixed-format surveys. More language means more material for thematic analysis.