component-id | name | description | type | work-package | project | resource | demo | licence | related-components | contributors | bibliography
---|---|---|---|---|---|---|---|---|---|---|---
deep-listening | Deep Listening | Software, methods and user studies exploring the cross-modal interpretation of music and visual art | UserInterface | | polifonia-project | | | | | |
Deep Listening is being carried out as part of the Polifonia project to investigate how the cross-modal interpretation of music and visual art can enhance both what you hear and what you see.
The work extends the Deep Viewpoints software, developed as part of the EU H2020 SPICE project to support the process of Slow Looking at visual art. Within Deep Viewpoints, the processes of observing and responding to art are guided by scripts. Each script is a sequence of stages containing artworks, statements and prompts or questions to which the reader of the script can respond. During the SPICE project, the Irish Museum of Modern Art (IMMA) used Deep Viewpoints as part of an initiative to reach communities traditionally underserved by the museum sector and to bring new perspectives to the museum's collection and exhibitions. Participating communities were involved not only in interpreting artworks with the guidance of the scripts but also in creating new scripts, mediating how others observe and think about art.
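As a rough illustration of the script structure described above, the data model might look something like the following minimal TypeScript sketch. The interface and field names here are assumptions for illustration, not the actual Deep Viewpoints schema.

```typescript
// Illustrative sketch of the script structure described above.
// All names are assumptions, not the actual Deep Viewpoints schema.

interface Artwork {
  title: string;
  artist: string;
  imageUrl: string;
}

// A stage mixes artworks, statements and prompts to which the reader can respond.
type StageElement =
  | { kind: "artwork"; artwork: Artwork }
  | { kind: "statement"; text: string }
  | { kind: "prompt"; question: string };

interface Stage {
  elements: StageElement[];
}

// A script is a titled sequence of stages, followed in order.
// Scripts can be authored by community members as well as museum staff.
interface Script {
  title: string;
  author: string;
  stages: Stage[];
}
```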
Recent work, carried out in collaboration between the Polifonia and SPICE projects, has investigated how Deep Viewpoints could be extended to support the cross-modal interpretation of music and visual art. First, support was added for embedding YouTube videos within scripts, so that the reader can listen to music while viewing artworks, reading associated text, and answering prompts within the same page of the app. Second, questions were extended to accept multiple-choice as well as free-text responses, enabling, for example, rating a piece of music on a scale or selecting an emotion that matches the music. Third, a responsive web design (RWD) approach was adopted for both following and authoring scripts, so that they can be followed and authored on personal smartphones (potentially with headphones) while in the museum, as well as on larger-screen devices.
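Continuing the earlier sketch, the first two extensions might correspond to new element and response types along the following lines. Again, these names and shapes are illustrative assumptions rather than the project's actual data model; only the YouTube embed URL pattern is standard.

```typescript
// Continuing the illustrative sketch: possible shapes for the two content
// extensions described above (names are assumptions).

// An embedded YouTube video, so listening can happen alongside viewing.
interface VideoElement {
  kind: "video";
  youtubeId: string; // the ID from a youtube.com/watch?v=... URL
}

// Prompts extended to accept a multiple-choice response as well as free text,
// e.g. rating the music on a scale or selecting a matching emotion.
type PromptElement =
  | { kind: "prompt"; mode: "freeText"; question: string }
  | { kind: "prompt"; mode: "multipleChoice"; question: string; options: string[] };

// Standard YouTube embed URL, as would be used to render the video in an <iframe>.
const embedUrl = (id: string): string => `https://www.youtube.com/embed/${id}`;
```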
The revised software has been used in two ways: (i) by a musicologist curating experiences that link music to visual art in a museum collection, and (ii) by visitors to a museum exhibition following and creating cross-modal experiences.
A second Deep Listening experiment has explored how cross-modal interpretation works online, using a range of music and visual art.
The browsing and navigation paradigm developed for Deep Listening is also being applied to support the public in exploring the ORGANS dataset.