
KUMA

Kuma is a pioneering platform for quantum-enhanced generative music, where human emotion and artificial intelligence meet in real time. Through non-invasive EEG, Kuma listens to the body and transforms raw physiological signals into immersive audiovisual landscapes.


At its core lies the Quantum Emotional Algorithm, which maps Valence, Arousal, and Presence into dynamic environments of sound and image. Inspired by Taoist philosophy and quantum mechanics, the system captures subtle emotional shifts and translates them into experiences that evolve with every heartbeat and every breath.


Kuma is not a soundtrack but a biological dialogue: a continuous feedback loop in which earth, water, air, and fire emerge as elemental environments. Powered by advanced neural audio engines and real-time generative visuals, emotions take shape as living landscapes that respond instantly to the listener's inner state.

This dialogue extends beyond art into therapy, research, and immersive media. In clinics, Kuma provides neuro-adaptive tools for mindfulness, stress reduction, and emotional regulation. In XR and immersive platforms, it sustains seamless experiences where sound never breaks immersion. In research labs, it opens new frontiers for studying affect, cognition, and the intersection of human emotion with machine intelligence.
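To picture the feedback loop, here is a minimal Python sketch of how a valence/arousal reading might select an elemental environment, with presence scaling the response. This is purely illustrative: the quadrant mapping, thresholds, and element assignments are our assumptions, not Kuma's actual Quantum Emotional Algorithm.

```python
# Hypothetical sketch of an emotion-to-environment mapping.
# All thresholds and element assignments are illustrative assumptions,
# not Kuma's proprietary algorithm.

def elemental_environment(valence: float, arousal: float) -> str:
    """Pick an elemental environment from valence and arousal in [-1, 1]."""
    if valence >= 0 and arousal >= 0:
        return "fire"   # energised, positive
    if valence >= 0:
        return "water"  # calm, positive
    if arousal >= 0:
        return "air"    # tense, negative
    return "earth"      # low-energy, grounded

def response_intensity(presence: float) -> float:
    """Clamp presence to [0, 1] as an intensity multiplier for the scene."""
    return max(0.0, min(1.0, presence))

# Each new biometric reading re-selects the environment, closing the loop:
reading = {"valence": 0.4, "arousal": -0.6, "presence": 0.8}
scene = elemental_environment(reading["valence"], reading["arousal"])
gain = response_intensity(reading["presence"])
```

In a real system this selection would run continuously against streamed EEG features, so the landscape drifts with the listener rather than switching abruptly.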

By 2030, adaptive music will sit at the core of therapy, immersive experiences, and XR. Yet most digital health tools today fail within two weeks, most clinics lack emotion-adaptive technologies, and mismatched music can even increase anxiety. Kuma responds to this gap with a clinically ready emotional engine, API-driven and trained on proprietary biometric datasets.


Can technology sense and heal emotion? Kuma turns this question into experience. It does not flatten complexity; it resonates with it. More than a technological system, it is a living interface between human and machine: a step toward emotional intelligence that does not replace us but harmonizes with us.


Credits:

Marco Accardi - Project Leader, Software Engineer, Creative Director

Alessandro De Angelis - Studio Manager

Sabrina Pippa - Head of Design and Communication

Quantum Basel - Quantum Computing Hardware Access Provider

Alessandro Inguglia - Software Engineer

Giovanni Bindi - Audio ML Researcher

Robin Otterbein - Audio ML Engineer

Luca Marinelli - MIR Researcher

We also thank the following for their support and insights:

Dr. Enrique Solano
Dr. Eric Michon
Dr. Rulin Xiu

Giovanni Palmisano
Jan Mikolon
Matteo Krummenacher
Maximilian Wurzer
Rajiv Krishnakumar
Alberto Di Maria
Paulo Vitor Itaboraí de Barros
Researchers at ACIDS Lab

Photography: Eva Diaz (@thisisnotevadiaz)
