Publications
2015
Daly, Ian; Malik, Asad; Weaver, James; Hwang, Faustina; Nasuto, Slawomir J.; Williams, Duncan; Kirke, Alexis; Miranda, Eduardo
Towards Human-Computer Music Interaction: Evaluation of an Affectively-Driven Music Generator Via Galvanic Skin Response Measures
Proceedings of the Seventh Computer Science and Electronic Engineering Conference 2015 (CEEC'15), IEEE, 2015, pp. 87-92.
@conference{Daly2015CEEC,
title = {Towards Human-Computer Music Interaction: Evaluation of an Affectively-Driven Music Generator Via Galvanic Skin Response Measures},
author = {Ian Daly and Asad Malik and James Weaver and Faustina Hwang and Slawomir J. Nasuto and Duncan Williams and Alexis Kirke and Eduardo Miranda},
url = {http://www.iandaly.co.uk/wp-content/uploads/2016/01/Identifying-music-induced-emotions-from-EEG-for-use-in-brain-computer-music-interfacing.pdf},
doi = {10.1109/CEEC.2015.7332705},
year = {2015},
date = {2015-09-01},
booktitle = {Proceedings of the Seventh Computer Science and Electronic Engineering Conference 2015 (CEEC'15)},
pages = {87--92},
publisher = {IEEE},
abstract = {An affectively-driven music generation system is described and evaluated. The system is developed for eventual use in human-computer interaction systems such as brain-computer music interfaces, and is evaluated for its ability to induce changes in a listener's affective state. The affectively-driven algorithmic composition system was used to generate a stimulus set covering 9 discrete sectors of a 2-dimensional affective space by means of a 16-channel feedforward artificial neural network. This system was used to generate 90 short pieces of music with specific affective intentions, 10 stimuli for each of the 9 sectors in the affective space. These pieces were played to 20 healthy participants, and it was observed that the music generation system induced the intended affective states in the participants. This is further verified by inspecting the galvanic skin response recorded from participants.},
keywords = {BCMI, GSR, Music generation},
pubstate = {published},
tppubtype = {conference}
}
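To make the generator's core idea concrete, here is a minimal sketch in Python/NumPy of the kind of mapping the abstract describes: a small feedforward network taking a target (valence, arousal) coordinate in a 2-dimensional affective space and producing 16 output "channels" that could drive musical parameters. The layer sizes, random weights, parameter semantics, and 3x3 sector grid are illustrative assumptions for exposition, not the authors' published architecture.

# Hypothetical sketch: affective target -> 16 musical control parameters.
# All weights and layer sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# 2 inputs (valence, arousal) -> 8 hidden units -> 16 output channels.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 16))
b2 = np.zeros(16)

def affective_to_music_params(valence: float, arousal: float) -> np.ndarray:
    """Forward pass: affective target in [-1, 1]^2 -> 16 parameter values."""
    x = np.array([valence, arousal])
    h = np.tanh(x @ W1 + b1)      # hidden-layer activations
    return np.tanh(h @ W2 + b2)   # 16 outputs, each in [-1, 1]

# One target per sector of a 3x3 grid over the affective space, mirroring
# the 9-sector stimulus set described in the abstract.
sector_centres = [(v, a) for v in (-0.66, 0.0, 0.66) for a in (-0.66, 0.0, 0.66)]
for valence, arousal in sector_centres:
    params = affective_to_music_params(valence, arousal)
    print(f"valence={valence:+.2f} arousal={arousal:+.2f} -> {params[:3].round(2)} ...")

In a full system, each of the 16 outputs would be scaled onto a concrete musical feature range (for example tempo, mode, or pitch register) before being handed to the composition engine; the paper itself should be consulted for the actual network design and parameter mapping.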