The Musical-Moods Dataset: Multimodal Information Retrieval And Learning Through Human/Computational Creativity

Abstract
As part of the MUSICAL-MOODS Marie Skłodowska-Curie fellowship, a new multimodal, mood-based dataset will soon be released online. The dataset comprises audio excerpts, vector-based 3D animations, and dance video recordings produced during automatic sound generation sessions with professional dancers, indexed by language modelling of the participants. The material was captured using Paolizzo's VIVO interactive music system and a 30-camera Vicon motion-capture system coupled with a green-screen digital video capture environment. The talk presents the current state of the project and the next stages of the research.
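To make the dataset's multimodal, mood-indexed organisation concrete, the sketch below shows one plausible way such records could be structured and queried. It is a minimal illustration only: the field names, mood vocabulary, and query function are hypothetical assumptions for this sketch, not the released dataset's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MoodAnnotation:
    """A mood label, e.g. as derived from language modelling of a participant.

    The label vocabulary and confidence score are illustrative assumptions.
    """
    label: str          # e.g. "joyful", "tense"
    confidence: float   # model confidence in [0, 1]


@dataclass
class DatasetEntry:
    """One hypothetical multimodal record from a single capture session."""
    session_id: str
    audio_path: str        # audio excerpt (automatic sound generation)
    animation_path: str    # vector-based 3D animation (motion-capture derived)
    video_path: str        # green-screen dance video recording
    moods: List[MoodAnnotation] = field(default_factory=list)


def query_by_mood(entries: List[DatasetEntry],
                  mood: str,
                  min_conf: float = 0.5) -> List[DatasetEntry]:
    """Return entries carrying the given mood label above a confidence threshold."""
    return [e for e in entries
            if any(m.label == mood and m.confidence >= min_conf for m in e.moods)]
```

A retrieval pass over such records would then amount to a simple filter, e.g. `query_by_mood(entries, "joyful")`, with each returned entry bundling the aligned audio, animation, and video for that session.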

Short biography

Fabio Paolizzo's research takes an interdisciplinary approach at the intersection of Music and the Arts, Computer Science, and Cognitive Science, focusing on computational creativity. Dr. Paolizzo is currently Principal Investigator of the Musical-Moods project, serving as Project Scientist in the Dept. of Electronic Engineering at the University of Rome Tor Vergata and as a postdoctoral research fellow with a joint appointment at UC Irvine in the Dept. of Cognitive Sciences and the Dept. of Dance. He holds a BA (Hons) and an MA (Hons) in Systematic Musicology from the University of Rome Tor Vergata, as well as a PhD in Music and Technology from the University of Kent.

Dr. Paolizzo develops intelligent, interactive music technology, including new methods for artificial intelligence and information retrieval, and applies this technology in creative practice to generate insights that drive innovation and address societal challenges.

Speaker:
Dr. Fabio Paolizzo
Venue:
CNR - Area della Ricerca di Palermo
Date:
18/01/2019, 11:00 AM