Science

Built on science, not vibes.

lumenx is built by behavioral scientists and computational researchers. Our methodology draws on validated frameworks from multiple disciplines. We publish our work and validate our own tools.

No single discipline can explain why people do what they do.

lumenx sits at the intersection of behavioral science, computational cognitive science, natural language processing, and psychometrics. Our team trained and published across these fields before building the platform.


The Behavioral AI Institute

Our CEO co-founded the Behavioral AI Institute, a non-profit research organization dedicated to ensuring AI development is informed by deep knowledge of human cognition, behavior, and values. The Institute brings together researchers from Harvard, UCL, Duke, and the University of Exeter. Its open letter calling for behavioral science as a core discipline in AI has gathered over 530 signatories.

behavioralai-institute.org

Deep Tech classification

In 2025, Bpifrance classified lumenx as Deep Tech, a designation awarded after technical review by government innovation experts. It recognizes technologies with a genuine scientific foundation that required significant R&D, as opposed to software built on existing tools. For lumenx, it reflects the combination of proprietary behavioral science ontologies, computational cognitive models, and AI conversation architecture that underpin the platform.

Behavioral AI Institute · Bpifrance Deep Tech · LSE · ENS · University of Zurich · MIT DesignX
Our research

Published work from the team behind lumenx.

Our team publishes across behavioral science, computational neuroscience, and applied management. We bridge academic research and real-world practice.

The missing discipline in AI: a call for behavioural science

Sacher, Michie, Hauser, Ferrère, Salzer, Rodger, Schaich Borg, Pogrebna & Murphy. Wellcome Open Research, 2026.

Wellcome Open Research · Harvard Business Review · MIT Sloan Management Review · Thomson Reuters
Our validation

We validate our own methodology.

235 adults with hearing loss across the UK and US, randomly assigned to one of three methods: AI-conducted behavioral conversation, traditional survey, or human-led interview. Same research question. Same population. Three different approaches.

94%

As good as human

of participants rated AI conversations as good as or better than a human-led interview. 59% rated it better.

n = 54 post-session comparisons

~4×

Richer responses

more participant language than surveys: 852 words per AI conversation vs. 216 words per survey response.

98 AI conversations, 79 surveys

8.7/10

Safety & comfort

average across comfort, emotional sensitivity, and feeling listened to. Participants disclosed personal experiences freely.

composite of 3 post-session measures

2 days

98 conversations

to complete all 98 AI interviews. By comparison, 30 human-led interviews took two weeks.

same population, same questions

Post-session experience scores

Every participant rated their experience immediately after the session. AI scores tracked closely with human-led interviews across all five dimensions.

AI (n = 60) vs. Human (n = 30)

Comfort: 8.9 vs. 9.7
Listened to: 8.6 vs. 9.8
Emotional sensitivity: 8.4 vs. 9.8
Question relevance: 8.6 vs. 9.8
Overall satisfaction: 8.4 vs. 9.6
0–10 slider scale, post-session questionnaire

Participant language per session

Behavioral conversations draw out substantially more language than fixed-format surveys. More language means more material for thematic analysis.

AI conversation: 852 words
Human interview: 725 words
Survey (open-ended): 216 words
average participant words per session

See the evidence in action.