Heritage Soundscapes

At my viva my external examiner pointed me towards this interesting paper, which she had co-authored – partly, I think, as an example of how I should restructure the discussion of my Chawton experiment in my thesis. But it contains some real gems (like “the museums studies literature points out the restorative value of an aesthetic experience that is clear of any information acquisition or learning objective and is centred instead on the sensorial experience of being there”) that make me regret missing it in my literature review: MARSHALL, M., PETRELLI, D., DULAKE, N., NOT, E., MARCHESONI, M., TRENTI, E. & PISETTI, A. 2015. Audio-based narratives for the trenches of World War I: intertwining stories, places and interaction for an evocative experience. International Journal of Human-Computer Studies, 27-39.

It’s a case study of a prototype “visitor-aware personalised multi-point auditory narrative system that automatically plays sounds and stories depending on a combination of features such as physical location, visitor proximity and visitor preferences”, called Voices from the Trenches, for a First World War exhibition at the Museo Storico Italiano della Guerra in Italy. What particularly interests me is that it’s part of the Mesch project, which has some other outcomes that I refer to in my thesis. The paper describes their intent to move away from what they call “the information-centric approach of cultural heritage.” I am sure a number of my professional colleagues would bridle somewhat at this accusation. After all, did not Tilden tell us in the 1950s that interpretation was more than mere information? But one of the things that my Chawton experiment uncovered was that actually too much “interpretation” turns out to be mere information after all.

The authors summarise previous experiments in responsive soundscapes, such as LISTEN, which “composes a soundscape of music and/or commentaries depending on the detected visitor’s behaviour: visitors that are not close or are moving are classified as unfocussed and for them a soundscape is created, while visitors that are standing still and close to the artwork are classified as focussed and a narrative (e.g. the curator describing the artwork) is played over the headphones.” Though many soundscapes are delivered via headphones, to avoid sound pollution for other visitors, the interesting SottoVoce project is designed around eavesdropping on what other people in our party are listening to. Half the respondents (in groups of two) heard the soundscape from each other’s phone speakers, while the other half had headphones. “When in loudspeaker mode visitors focussed on what was displayed on the screen of the mobile device and stayed close to the sound source while partners linked via the same audio on their headphones had a more dynamic visit driven by each other’s interest in the exhibits.”
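Just to get the LISTEN logic straight in my own head, here is a minimal sketch (in Python) of the kind of rule the paper describes. The thresholds, file names and function names are my own assumptions for illustration, not anything taken from the system itself.

```python
# A rough reconstruction of the LISTEN rule as described: visitors standing
# still and close to the artwork count as "focussed" and get a narrative;
# everyone else counts as "unfocussed" and gets an ambient soundscape.
# Thresholds and file names are illustrative assumptions only.

def classify_visitor(distance_m, speed_m_s, near_threshold=1.5, still_threshold=0.2):
    """Classify a visitor from proximity and movement readings."""
    if distance_m <= near_threshold and speed_m_s <= still_threshold:
        return "focussed"
    return "unfocussed"

def choose_audio(state):
    # Focussed visitors hear a narrative (e.g. the curator describing the
    # artwork); unfocussed visitors hear a composed soundscape instead.
    return "curator_narrative.mp3" if state == "focussed" else "ambient_soundscape.mp3"

print(choose_audio(classify_visitor(distance_m=1.0, speed_m_s=0.0)))  # narrative
print(choose_audio(classify_visitor(distance_m=4.0, speed_m_s=1.2)))  # soundscape
```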

“The ability to convey and evoke emotion is a fundamental aspect of sound,” they say, and explain: “The affective power of voice and audio storytelling has been recognised as creating a connection to the listener and is even amplified when spoken words are not coupled with the visual capture of the storyteller, creating a sense of intimacy and affective engagement.” And they built their soundscapes using the same sort of mix of music, speech and other sounds that I used (in a limited fashion) at Chawton. Some of the primary source material was recorded to sound more like oral history, with actors reading the words “with palpable emotion” to be more affective. The responsiveness is similar to that of LISTEN, but the “staying still” metric isn’t used; instead a simpler proximity method is used. Woven into that soundscape are voice recordings for attentive listening, selected by the visitor choosing from a selection of cards. The sound was delivered by loudspeakers but, unlike SottoVoce, not on people’s own devices; rather they were placed around the site. This was what I did for the Chawton Untours too.

The particular challenge with this project was that it was outdoors. The difficulties of maintaining equipment, connecting power and data etc. mean that most sites resort to delivering via mobile device. But on the other hand: “While engagement in a museum tends to be via prolonged observation, in an outdoor setting multiple senses are stimulated: there is the physical, full-body experience of being there, the sight and the sound of the surroundings, possibly the smell too. The multi-sensory setting places the visitor in direct connection with the heritage and enables engagement at an emotional, affective level rather than at a pure informative level.” (p6) The danger of using a mobile device to deliver interpretation is one I wrote about here, but essentially it takes visitors out of where they are; it is the antithesis of presence.

With all this in mind the designers of the project set out five clear principles:

  • To engage at multiple levels, not just cognitive
  • To focus the visitors’ attention on the heritage, not the technology
  • To deal with group dynamics sensibly
  • To be provocative and surprise visitors, but design simple and straightforward interactions
  • To personalize content on the basis of clear conditions

The choice of sound over anything screen-based was an outcome of the second principle. Loudspeakers rather than headphones were also an attempt to focus attention on the heritage: “During a small experiment in a local outdoor heritage site, we observed that audio creates a wider attraction zone where passers-by become aware of the sound source, and a closer engagement zone around the emitting point where one has to stop and listen in order to understand what the voice says.”

So they designed a soundscape that featured music and sound to attract visitors to a location, and then voice recordings to hold them there. The narratives are arranged thematically, with different voices (authoritative and intimate) indicating the nature of the content. Quite how the visitor chooses is not really made clear, but I expect it is by approaching the voices that most attract them.
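That attract-then-hold idea can be sketched as two concentric zones around each emitting point. The radii and file names below are my own illustrative guesses, not values from the paper:

```python
# Two-zone sketch: ambient music in a wide "attraction zone" draws passers-by
# towards the emitting point; a voice recording in the tighter "engagement
# zone" holds them there. Radii and file names are assumptions for illustration.

def audio_for_distance(distance_m, engagement_radius=2.0, attraction_radius=10.0):
    if distance_m <= engagement_radius:
        return "voice_narrative.mp3"   # close enough to stop and listen
    if distance_m <= attraction_radius:
        return "ambient_music.mp3"     # aware of the sound source, drawn in
    return None                        # out of earshot, nothing plays

for d in (1.0, 6.0, 25.0):
    print(d, audio_for_distance(d))
```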

The team trialled the idea by observing visitors’ behaviour using about 23 minutes of content, but I was disappointed that they did not come up with any solutions to the problems we encountered trying to evaluate the soundscape at The Vyne. It is hard to observe and distinguish between active listening and background listening. The authors seem to assume that if the active listening content is playing, then the participants are actively listening. The only evidence they have for this is a qualitative questionnaire, which I am not convinced is an accurate measure of engagement. Yes, they said they enjoyed and benefitted from the experience, but if they did not know that was what was being tested, what proportion would even have mentioned the soundscape?

Of course they identified a number of challenges, not least fine-tuning the volume to be loud enough to attract attention and yet not so loud as to cause discomfort. This is especially true of the different voices, with some by necessity quieter and more intimate. Of course they also predicted issues over scalability – similar to the ones I planned for but wasn’t able to properly test at Chawton: “how well would such a system work in a busy environment with many groups interacting.”

Resonance: Sound, music and emotion in historic house interpretation

Just drafted an abstract for my Sound Heritage presentation:

This presentation explores what computer games can teach us about emotional engagement in cultural heritage interpretation. Beginning with a model of emotional affect drawn from the work of Panksepp and Biven (Panksepp and Biven, 2012), Lazzaro (Lazzaro, 2009), Sylvester (Sylvester, 2013) and Hamari et al. (Hamari et al., 2014), it reveals how music especially has become a versatile emotional trigger in game design.

Drawing on the work of Cohen (Cohen, 1998) and Collins (Collins, 2008), it lists eight functions that music has in games:

Masking – Just as music was played in the first movie theaters, partly to mask the sound of the projector, so music in new media can be used to mask the whir of the console’s or PC’s fan.

Provision of continuity – A break in the music can signal a change in the narrative, or continuous music signals the continuation of the current theme.

Direction of attention – patterns in the music can correlate to patterns in the visuals, directing the attention of the user.

Mood induction; and,
Communication of meaning – the nice distinction here is between music that makes the user sad, and music that tells the user “this is a sad event” without necessarily changing the user’s mood.

A cue for memory – The power of the music to invoke memories or prepare the mind for a type of cognitive activity is well recognized in advertising and sonic brands such as those created for Intel and Nokia.

Arousal and focal attention – With the user’s brain stimulated by music s/he is more able to concentrate on the diegesis of the presentation.

Aesthetics – The presentation argues that all too often music is used for aesthetic value only in museums and heritage sites, even if the pieces of music used are connected historically with the site or collection.

As an example, the presentation describes a project to improve the way music is used in the chapel at the Vyne, near Basingstoke. Currently, a portable CD player is used to fill the silence with a recording of a cathedral choir: pretty, but inappropriate for the space and for its story. A new recording is being made to recreate about half an hour of a pre-Reformation Lady Mass, with choristers, organ and officers of the church, to be delivered via multiple speakers, which will be even prettier but also a better tool for telling the place’s story.

With a proposed experiment at Chawton House as an example, we briefly explore narrative structure, extending the concept of story Kernels and Satellites described by Shires and Cohan (Shires and Cohan, 1988) to imagine the cultural heritage site as a collection of narrative atoms, or Natoms (Hargood et al., 2012), both physical (spaces, collection) and ephemeral (text, video, music etc.). Music, the presentation concludes, is often considered as a “mere” satellite, but with thought and careful design there is no reason why music cannot also become the narrative kernels of interpretation.

 

COHEN, A. J. 1998. The Functions of Music in Multimedia: A Cognitive Approach. Fifth International Conference on Music Perception and Cognition. Seoul, Korea: Western Music Research Institute, Seoul National University.

COLLINS, K. 2008. An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio. In: RICHARDSON, J. & HAWKINS, S. (eds.) Essays on Sound and Vision. Helsinki: Helsinki University Press.

HAMARI, J., KOIVISTO, J. & SARSA, H. 2014. Does Gamification Work? – A Literature Review of Empirical Studies on Gamification. 47th Hawaii International Conference on System Sciences (HICSS), 6-9 Jan. 2014. 3025-3034.

HARGOOD, C., JEWELL, M. O. & MILLARD, D. E. 2012. The Narrative Braid: A Model for Tackling the Narrative Paradox in Adaptive Documentaries. NHT12@HT12. Milwaukee.

LAZZARO, N. 2009. Understand Emotions. In: BATEMAN, C. (ed.) Beyond Game Design: Nine Steps Towards Creating Better Videogames. Boston, MA: Course Technology / Cengage Learning.

PANKSEPP, J. & BIVEN, L. 2012. The Archaeology of Mind: Neuroevolutionary origins of human emotions, New York, W. W. Norton & Company.

SHIRES, L. M. & COHAN, S. 1988. Telling Stories: A Theoretical Analysis of Narrative Fiction, Florence, KY, USA, Routledge.

SYLVESTER, T. 2013. Designing Games – A Guide to Engineering Experiences, Sebastopol, CA, O’Reilly Media.

Chawton Untours and more

It’s a funny-feeling time. The calendar pages seem to flicker by as the year rushes towards its end, and the deadlines for various aspects of the Chawton project loom ominously. On one level I worry I have achieved so little and yet, on another, so much has gone on. So it seems inevitable that this post will consist of a number of short catch-ups on various aspects.

Untours

First of all, I’ve got a name for what we offer the public next year. I’d been struggling to think of how I’d present the project to Chawton’s visitors in a way that meant something. I’ve been calling it “the project”, “my experiment” or a “responsive environment”, none of which would sell the concept to potential participants. But a few weeks back I met a colleague who told me about an experimental opening of the Roundhouse in Birmingham. Working with a couple of performance poets, they opened the building for sneak previews that they called “Un-Tours”.

The National Trust’s Un-Tours are not quite the same as what I’m planning of course. But I thought it was a perfect name: visitors will explore the house with a volunteer, but the volunteer won’t be a guide leading them from room to room. They choose where they go, and what they look at, and the volunteer responds to their interests with the relevant natoms. So my volunteers are Unguides, and the tours, Untours (I decided we didn’t need the hyphen). I told my colleague there and then that I was nicking the name.

A collaborator!

The next exciting thing that happened was meeting Ed Holland. Ed is studying Music at Southampton, and was looking for a studio project. He has agreed to help me with the sound natoms. I met him for a second time yesterday, with the always brilliant Jeanice Brooks, and we started to break the musical narrative, focused on domestic life at the turn of the eighteenth/nineteenth centuries, which will reference the Jane Austen connections that Chawton has, without being about her (given there’s a museum dedicated to her just down the road).

Talking about sound

Of course between those two meetings with Ed, I’ve been thinking a lot about sound. As long-time readers may be aware, I’m keen to put as few barriers/filters as possible between the visitor and the space they are in. So my preference is always for speakers, but Ed suggested that headphones may offer a more immersive soundscape for less money.

However, one of the key aims of this project is to investigate a set of “contention rules” for when more than one visitor/visiting group enters the same space with different story needs. Of course, if everyone were wearing headphones, that soundscape contention wouldn’t be an issue. Which may be a good thing (for visitor experience) as well as a bad one (for my investigation). I’ve also been thinking about other ways my paltry budget might limit what we can achieve. I hope to store all the assets on the web (in Scalar currently) so that a volunteer Unguide can use any smart device to participate (BYOD). But of course, that will (I’m thinking – you may know differently) limit each Unguide to delivering just one channel of sound to his/her visitor group. That limits Ed’s ambitions for a multi-channel directional soundscape, but he is making contact with some of the sound guys in our School of Engineering to see if there’s any cool stuff (or speakers) we can use at Chawton.
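To make the contention problem concrete, here is one possible rule sketched in Python. The rule itself (the earliest-arriving group keeps its story in the room) and all the names in it are my own assumptions, not a settled design:

```python
# A sketch of one possible "contention rule": when two visiting groups with
# different story needs share a room and its single speaker, the group that
# arrived first keeps its narrative. This is just one rule I could test during
# the pilot; the names and data are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Group:
    name: str
    story: str          # e.g. "Montague Knight", "Women's Literature", "Austen"
    arrived_at: float   # seconds since the Untour began

def resolve_contention(groups_in_room):
    """Return (group whose story plays, audio sent to the room's speaker)."""
    if not groups_in_room:
        return None, "silence"
    first = min(groups_in_room, key=lambda g: g.arrived_at)
    return first, first.story + " narrative"

room = [Group("A", "Montague Knight", 120.0), Group("B", "Austen", 300.0)]
winner, audio = resolve_contention(room)
print(winner.name, audio)  # A Montague Knight narrative
```

Whether that rule feels fair to the second group, or whether a neutral ambient track is a better fallback, is exactly the sort of thing the Untours should tell me.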

Assuming we don’t get to borrow anything cool though, I’ve suggested that Ed:

  • Works on creating a music/sound library based on the lowest spec – a single channel from a cheap Bluetooth speaker in each room.
  • Specifies the hardware requirements for a system that might deliver his ideal soundscape, either using a multichannel directional speaker system or headphones (Imagine 20 headphone users in the house at the same time). I can guarantee I won’t be able to afford it, but it would be useful research anyway. And we could test a limited version of the concept, with borrowed equipment, during the pilot stage (currently scheduled from the beginning of December in my project plan).

My budget, though tiny, is flexible (it’s my own money) so, I could maybe stretch to something in between the two extremes, if it was something that offered some of the functionality Ed would really want, and maybe had some domestic life afterwards.

Story troubles

The thing that I’ve had most trouble with these last few weeks is the story. I wanted to have at least three narratives – one on the history of the building (and I thought an early 20th-century owner, Montague Knight, would be the easiest focus for that); one on Women’s Literature; and the Austen one, mentioned above.

In my innocence I thought that I would quickly knock out an emotionally compelling Montague Knight narrative, but after weeks of reading, arranging and re-arranging, I’ve realised that (duh!) real-life stories don’t comply with literary “rules”. Or rather, I’ve realised that maybe my standards, my expectations, for this were too high. I’ve wasted time down a rabbit hole, trying to craft a story that I was going to muck up anyway by letting visitors make their own choices. I was crafting a traditional guided tour, not an Untour! So, I’ve decided on a different tack. Instead, I’m going to spend some time analysing the natoms I already have, and attributing story beats to each one. The story should (after all) be procedural.
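To show what I mean by that, here is a minimal sketch of a natom tagged with a narrative and a story beat, and a trivially simple picking rule. The tags, beat names and rule are illustrative assumptions, not my final design:

```python
# Sketch: each natom (physical or ephemeral) carries tags for the narrative and
# story beat it can serve; the Untour simply offers whatever natom in the current
# room advances the chosen narrative. All names here are made up for illustration.

NATOMS = [
    {"id": "great_hall_portrait",  "kind": "physical",  "narrative": "Montague Knight",    "beat": "setup"},
    {"id": "library_letter_audio", "kind": "ephemeral", "narrative": "Women's Literature", "beat": "catalyst"},
    {"id": "dining_room_music",    "kind": "ephemeral", "narrative": "Austen",             "beat": "setup"},
]

def next_natom(room_natoms, narrative, beats_already_heard):
    """Pick the first natom in this room that advances the chosen narrative."""
    for natom in room_natoms:
        if natom["narrative"] == narrative and natom["beat"] not in beats_already_heard:
            return natom
    return None  # nothing new here: the Unguide improvises, or the group moves on

print(next_natom(NATOMS, "Montague Knight", beats_already_heard=set()))
```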

The outcome of this experiment isn’t (wasn’t ever) meant to be the best interpretive experience. All it is is a step towards understanding how procedural narratives might work in historic spaces.

Music in Interpretation

Jeanice asked me before Christmas about academic study of how music impacts heritage interpretation. My first response was “there is none” (and I stand by that), but it did make me dig out a couple of papers that I’d found and not included in my literature review. And on reflection, I think I may indeed go back and add one of them in.

The first was Musical Technology and the Interpretation of Heritage, a conference keynote speech given by Keith Swanwick, and published in 2001 by the International Journal of Music Education. The publication is the clue that this isn’t really about heritage interpretation (as I’m defining it) at all, but rather about cultural transmission through music, and especially through music teaching. I left it in my notes because it references information and digital technology but, re-reading it for Jeanice, I realise it doesn’t do anything with that reference, apart from equating music itself with ICT as a mode of cultural transmission.

There’s some discussion of compositions created with cultural transmission as an intent, which may be interesting for some later study, but it doesn’t give the overview of music as museum/cultural heritage-site media that the title promised.

More interesting is this paper, from the V&A’s on-line research journal. The paper explores the development of a collaborative project between the Royal College of Music and the V&A, involving new recordings of period music for the Medieval & Renaissance Galleries. The first thing that strikes me is that the galleries opened in 2009, and yet the project was conceived in 2002. Sometimes I wish I worked for an organisation that gave such projects a similar amount of time to gestate.

Drawing on all their front end evaluation, and the debates on learning styles and segmentation that have taken place over the years, the Medieval & Renaissance Galleries team were keen to offer visitors a “multi-sensory framework […] incorporating opportunities for tactile experience, active hands-on learning and varied strategies for helping visitors to decode medieval and Renaissance art actively.” This included audio as well as film and other digital media.

This is where I found the quote that, on reflection, I think I should at the very least include in my literature review. It comes from a footnote, and usefully sums up my fruitless search for literature on music in cultural heritage sites:

Music in museums has not been the focus of detailed study or writing.

The article goes on to round up various ways in which music is used in (general, not museums of music) interpretation, for example, places where popular music of the twentieth century is used to help immerse visitors in a particular decade. Of particular interest is The Book of the Dead: Journey to the Afterlife, a British Museum exhibition for which, “…a musical soundtrack was commissioned to heighten emotional effect at a key moment in the exhibition narrative.” I might have to try and find out more about that commission.

The V&A actually included two exhibitions in their music making. While the Medieval & Renaissance Galleries were being developed, the museum ran a temporary exhibition called At Home In Renaissance Italy, for which 24 pieces were recorded and played ambiently in rotation. “Evaluations demonstrated an overwhelmingly positive response to the music from the visitor’s point of view” but also highlighted some of the problems, not least of which was that some people (especially staff who have to hear it non-stop) really don’t like ambient music. This evaluation informed how the music project developed for the permanent gallery.

The plan had been to use pre-existing recordings “that could help visitors to imagine the medieval and Renaissance worlds and to convey emotion and feeling.” But as the curatorial research developed, it became apparent that there were opportunities to use music that hadn’t previously been recorded, but that was directly connected to the objects and stories of the exhibition. Because “evaluation of audio provision in the V&A’s British Galleries demonstrated that audio-tracks were less effective without a strong connection to immediately adjacent objects or displays” the museum decided upon benches equipped with touch-screens and good-quality headphones as “audio-points” where a user could sit and browse music related to what they could see in front of them. Each piece faded out after a minute or two, to ensure a reasonable rate of churn of listeners, but the complete pieces were available from the V&A website for those who wished to listen to them in full.

Sadly the evaluation had too wide a remit to explore in depth visitors’ responses to the music. All they could say was that it “showed that a high percentage of users engage with the audio-points, a strong indication that they are valued by visitors.” I would have liked to have discovered how well the music achieved their aims of conveying emotion and feeling. They do conclude however that “The increasing ownership of smartphones and MP3 players is rapidly increasing the options for museums to deliver music in gallery spaces and the number of ways in which visitors can choose to engage with it.”

So we need to see some more research about how it’s used and its impact on the visitor experience.

 

Story, Time and Place

This is the Prezi and below are my notes in preparation for a short presentation I gave to a Digital Humanities seminar group at University today. Hosted WordPress still can’t deal with embedded Prezis, so click the link at the start to see the slides. And my notes below are just notes, so you’ll have to imagine me riffing off them to make an entertaining, compelling and coherent (I hope!) presentation.

The Lindisfarne Gospels is an illuminated manuscript gospel book produced around the year 700 in a monastery off the coast of Northumberland at Lindisfarne and which is now on display in the British Library in London.

Illuminated

Very little structure to the text, no paragraphs etc

In the 10th century an Old English translation of the Gospels was made: a word-for-word gloss inserted between the lines of the Latin text by Aldred, Provost of Chester-le-Street.

This is the oldest extant translation of the Gospels into the English language, and a great example of a reader interacting with the text.

Laurence Sterne created one of the first texts to be interacted with. Tristram Shandy is an epistolary novel, but it’s more than that, sampling other works of literature to bring new meanings.

He chose the format, paper, type and layout of the novel. It’s a book to be played with.

Last year’s Building Stories. Like Tristram Shandy, a story to be played with. Chris Ware (the author) suggests leaving bits of it around your own building to chance upon.

Georges Méliès, regarded as the first person to recognize the potential of narrative film. Goes beyond sequential time/movement and to imaginary places.

Voyage Dans La Lune, special effects, jump cuts, locations etc. started a century of narrative experimentation.

For example, music:

diegetic music (where musicians are playing in the story, or characters are listening to the radio, for example),
nondiegetic music (where, as she says, “an orchestra plays as cowboys chase indians upon the desert”) and
metadiegetic music (where we hear a character “remember” a bit of music).
She also talks about themes, and what Wagner called “motifs of reminiscence.”

But despite all this innovation, don’t you find some films “Same-y”?

Not every film has been a success of course. After some test screenings Disney called in “script doctors” to fix The Lion King.

Christopher Vogler – Joseph Campbell’s Hero’s Journey applied to The Lion King, then the book The Writer’s Journey.

Save the Cat! Blake Snyder’s Beat Sheet – Almost an algorithm for scripting film. 110 pages

  • Opening Image (page 1) – A visual that represents the struggle & tone of the story.
  • Set-up (ten pages) – Expand on the “before” snapshot. Present the normal world. Including:
  • Theme Stated (page 5) – say it: “with great power comes great responsibility.”
  • Catalyst (page 12) – the world turns upside down. Emotional shock.
  • Debate (for thirteen pages) – Dare our heroes actually explore the new world?
  • Break Into Act Two (page 25) – The main character makes a choice and the journey begins.
  • B Story (begins on page 30) – This is when there’s a discussion about the Theme – Timon and Pumbaa in The Lion King.
  • Fun and Games (twenty-five pages) – the action, the roller-coaster ride, the caper.
  • Midpoint (page 55) – Success!
  • Bad Guys Close In (for twenty pages).
  • All Is Lost (page 75) – The opposite of Success. An emotional nadir.
  • Dark Night of the Soul (for ten pages) – woe is me. Hit rock bottom.
  • Break Into Three (page 85) – the B Story provides the solution to the A Story.
  • Finale (twenty-five pages) – This time around, the main character incorporates the Theme – the nugget of truth that now makes sense to them – into their fight for the goal, because they have experience from the A Story and context from the B Story. Act Three is about Synthesis!
  • Final Image (page 110) – ride into the sunset, a changed character.

Of course the audience have to see each frame of the film in the order in which it is presented. Only the director gets to play with chronology.

Games give back the power to explore the narrative

Procedural narratives versus authored narratives.

Describe RDR: starts off interactive, but delivers fewer and fewer choices towards an inevitable end. Authored, not procedural. Are procedural stories only in need of great endings?

Non-linear sound in video games

The week before last, I wrote about Annabel Cohen‘s paper on music in video games, and mentioned Karen Collins of Gamesound.com. Collins has written a great deal on games and sound. Her 2007 paper, An Introduction to the Participatory and Non-Linear Aspects of Video Games Audio, from the book Essays on Sound and Vision, seemed a good place to start.

Collins begins by suggesting the subtle differences between the terms “interactive,” “adaptive” and “dynamic”. In her useful set of distinctions, “interactive” sounds or music are those that respond to a particular action from the player, and each time the player repeats the action the sound is exactly the same. Citing Whitmore (2003) she argues that “adaptive” sounds and music are those that respond, not to the actions of the player, but rather to changes occurring in the game (or the game’s world) itself. So “an example is Super Mario Brothers, where the music plays at a steady tempo until the time begins to run out, at which point the tempo doubles.” She goes on to describe “dynamic” audio as being interactive and/or adaptive.
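A toy distinction in code, to fix the terms in my head (my own sketch, not Collins’s): the “interactive” sound responds to a player action and is identical every time, while the “adaptive” music responds to the state of the game world, like the Super Mario Brothers tempo doubling when time runs low.

```python
# Illustrative sketch of Collins's terms. "Interactive": same sound for the same
# player action, every time. "Adaptive": responds to game state, not player input.
# Thresholds and file names are assumptions for illustration.

def interactive_jump_sound(player_pressed_jump):
    # Triggered directly by the player's action; identical on every repeat.
    return "jump.wav" if player_pressed_jump else None

def adaptive_music_tempo(base_tempo_bpm, time_remaining_s):
    # Driven by the state of the game world (time running out), not by input.
    return base_tempo_bpm * 2 if time_remaining_s < 100 else base_tempo_bpm

print(interactive_jump_sound(True))        # jump.wav
print(adaptive_music_tempo(100.0, 250.0))  # 100.0
print(adaptive_music_tempo(100.0, 60.0))   # 200.0
```

“Dynamic” audio, in her usage, would then be anything that does either or both.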

She also explores the various uses for sound and music in games. She has read Cohen, obviously, and so her list is very similar. She quotes Cohen in relation to masking real-world environmental distractions, and in the distinction between the mood-inducing and communicative uses of music. She points out, though, that the non-linear nature of game sound means that it’s more difficult to predict the emotional effects of music (and other sounds). In film, she states, it’s possible for sounds to have unintended emotional consequences – a director wanting to inform the audience that there is a dog nearby will tell the sound designer to include a dog barking out of shot, but the audience will bring their own additional meaning to that sound, based on their previous experiences (which she calls supplementary connotation). But in games, she argues, where sounds are triggered and combined in relatively unpredictable sequences by player actions, even more additional meanings are possible.

She also discusses how music can be used to direct the player’s attention, or to help the player “to identify their whereabouts, in a narrative and in the game.” She points out how “a crucial semiotic role of sound in games is the preparatory functions that it serves, for instance to alert the player to an upcoming event.”

This is something that was made very clear while I played both Red Dead Redemption and Skyrim. Red Dead Redemption would often alert me to an upcoming threat by weaving a more urgent, oppressive tune into the background music. Skyrim took a different approach: its music doesn’t work as hard, but while my cat-creature was sneaking around underground tunnel systems, I was often alerted to potential threats by my enemies muttering to themselves as I approached blind corners. Collins points out that these sorts of cues have occasioned a change in listening style, from passive to active listening, among gamers.

Sometimes though, as Collins points out, games are created that put musical choice directly into the players’ hands. The Grand Theft Auto series gives the player a choice of in-car radio stations to listen to, so that their particular tastes are better catered for. Though they weren’t around at the time of Collins’s writing, many iOS and other mobile games have a feature by which the player can turn off game music and even other game sound effects if they so choose, to listen to their own library of music, stored on the device. She even cites the game Vib Ribbon, for the Sony PlayStation, which allows the player to load their own music from CDs; the music then changes the gameplay according to the structure of the music the player has loaded.

Collins also discusses the challenges that composers face when writing for games. For a start, Collins points out that “in many games it is unlikely that the player will hear the entire song but instead may hear the first opening segment repeatedly, particularly as they try to learn a new level.” (She also points out that many game designers are learning to include what one composer calls a “bored now switch”: after a number of repeats of the same loop of music, the sound fades to silence, which both informs players that they should have completed this section by now, and stops them getting annoyed and frustrated by the repetition.)
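The “bored now switch” is simple enough to sketch: count the repeats of the current loop and fade to silence once a limit is reached. The repeat limit here is an arbitrary assumption:

```python
# Sketch of a "bored now switch": after a set number of repeats of the same
# music loop, fade to silence rather than looping again. The limit is arbitrary.

class BoredNowSwitch:
    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.repeats = 0

    def on_loop_end(self):
        self.repeats += 1
        if self.repeats >= self.max_repeats:
            return "fade_to_silence"   # the player has lingered too long here
        return "play_loop_again"

switch = BoredNowSwitch(max_repeats=3)
print([switch.on_loop_end() for _ in range(4)])
# ['play_loop_again', 'play_loop_again', 'fade_to_silence', 'fade_to_silence']
```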

The other main problem is that of transition between different loops (or cues, as she calls them). “Early games tended towards direct splicing and abrupt cutting between cues, though this can feel very jarring on the player.” Even cross-fading two tracks can feel abrupt if it has to be done quickly enough to keep up with game play. So composers have started to write “hundreds of cue fragments for a game, to reduce transition time and to enhance flexibility in music.” This is the approach taken in Red Dead Redemption, where, as I move my character around the landscape, individual loops fade in and out according to where I am and what is happening, but layered together they feel (most of the time) like one cohesive bit of music.
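My mental model of that layering, sketched below, is a set of synchronised loops whose volumes are driven by game state, so the mix changes without an abrupt cut. The layer names and gain values are my own guesses, not anything from Rockstar or from Collins:

```python
# Sketch of layered cue fragments: several loops play in sync and each one's
# gain is a function of game state, so the mix evolves smoothly rather than
# cutting between cues. Layer names and weightings are illustrative assumptions.

def layer_gains(in_town, in_combat, riding):
    return {
        "ambient_pad":    1.0,                                 # always present, ties the mix together
        "town_guitar":    0.8 if in_town else 0.0,
        "combat_drums":   1.0 if in_combat else 0.0,
        "riding_strings": 0.7 if riding and not in_combat else 0.0,
    }

print(layer_gains(in_town=False, in_combat=False, riding=True))
print(layer_gains(in_town=False, in_combat=True,  riding=True))
```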

Multiplayer games present another problem. “If a game is designed to change cues when a player’s health score reaches a certain critical level, what happens when there are two players, and one has full health and the other is critical?” she asks.

There are rewards too: get the music right, and games publishers can find an additional source of income. She quotes a survey which discovered that “40% of hard-core gamers bought the CD after hearing a song they liked in a video game.” (Ahem, guilty as charged m’lud, even though I’m not a “hard-core gamer.”)

Just before she completes the paper, she has some thoughts on the perception of time too. I’ve noticed a sort of “movie-time” effect in Skyrim, which presents a challenge for my real-world cultural spaces. So I think I might need to look at that in more detail.

Musical interlude

I’ve been on holiday (and heritage free, spending my time bodyboarding, cycling, sea-kayaking and, lest anyone thinks that all sounds too healthy, over-eating in Cornwall) so this blog has been quiet for a week.

But while I was away, a colleague shared a link to a very interesting blog post about London museums creating Spotify playlists to accompany exhibitions.

The writer is conflicted about whether these should be listened to while actually at the exhibition, or before or after a visit. But there’s something interesting here about using music to set the mood, either prior to or during a visit, or when reflecting upon it afterwards.

Music in new media

I’ve been thinking about music again, and staring into the pit of unknown unknowns that is my non-existent understanding of music, except as a casual listener. I know music affects me, and I’ve seen how important an emotional trigger it is in the games I’ve been playing for my studies, but I don’t know how or why, and right now I’m wishing I had a degree in Cognitive Psychology to help me understand. (The certificate would sit alongside the degrees in Computer Science, English and History that I don’t have.)

It’s such a huge subject, but I came across this paper, by Annabel Cohen, which though quite old (1998) I’ve found to be a useful primer. It also led me to the Gamessound website of Dr Karen Collins, Canada Research Chair in Interactive Audio at the Games Institute, the University of Waterloo, Ontario, who has written lots of juicy papers which start where Cohen left off and are (the clue’s in the URL) a lot more games-specific.

Let’s start with Cohen though, a sort of new media music 101. She begins from the notion that “music activates independent brain functions that are separable from verbal and visual domains,” and goes on to define eight functions that music has in new media:

  1. Masking – Just as music was played in the first movie theaters, partly to mask the sound of the projector, so music in new media can be used to mask “distractions produced by the multimedia machinery (hum of disk drive, fan, motor etc) or sounds made by people, as multimedia often occurs in social or public environments.” Apparently lower tones mask higher ones, and listeners filter out incoherent sounds in preference for coherent (musical) sounds. Of course the downside is music can mask speech too, when that speech is part of the intended presentation.
  2. Provision of continuity – “Music is sound organised in time, and this organisation helps to connect disparate events in other domains. Thus a break in the music can signal a change in the narrative [I’m reminded of the songs in Red Dead Redemption here] or, conversely, continuous music signals the continuation of the current theme.”
  3. Direction of attention – Cohen has obviously done some experimental research on this function, broadly speaking, patterns in the music can correlate to patterns in the visuals, directing the attention of the user.
  4. Mood induction – (a quick aside here: check out this Mirex wiki page on mood tags for music). I’ve written about this before, and it’s the most obvious function to me, but Cohen is careful to make a distinction between this and the next function, which is:
  5. Communication of meaning – Cohen says “It is important to distinguish between mood induction and communication of meaning by music. Mood induction changes how one is feeling while communication of meaning simply conveys information.” Yet, when she discusses communication of meaning, she uses examples of emotional meaning: “sadness is conveyed by slow pace, falling contour, low pitch and the minor mode.” I take from this that her nice distinction is between music that makes the user sad, and music that tells the user “this is a sad event” without changing the user’s mood. Hmmm … I’ll have to think about that.
  6. A cue for memory – This is another one that I’ve written about before. Music can trigger a user’s memories from a past event that’s totally unrelated to the new media presentation, if they’ve coincidentally heard the particular piece before, but the effect is more controllable with music especially written for the presentation. The musical term for this (from opera, arguably the first multimedia presentations) is leitmotiv. The power of the music to invoke memories or “prepare the mind for a type of cognitive activity” is well recognized in advertising and sonic brands such as those created for Intel and Nokia.
  7. Arousal and focal attention – “it is a simple fact that when there is music, more of the brain is active,” Cohen says (without reference). She goes on to argue that with more of the brain active, the user is more able to filter out the peripheries of the apparatus running a new media presentation, and concentrate on the diegesis of the presentation, what Pinchbeck calls presence. On the other hand, she admits that some think excess stimulation pulls focus away from central vision and towards the periphery.
  8. Aesthetics – Here we come to what my colleagues report is the biggest issue with using music in interpretation. Cohen says “music is an art form and its presence enhances every situation in much the same way that a beautiful environment enhances the experience of activities within it.” But she admits that aesthetics is subjective, and “music that is not appealing can disturb the user.” Not only that, but some individuals may find all background music difficult to cope with.

So that’s my new media music 101. Next time I’ll look at what Collins has to add.

Unravelling The Vyne

Another short note, this time on a contemporary art exhibition at one of the National Trust places I work with.

I’ve mentioned the Vyne before (in one of my most popular posts). This time, the focus isn’t on Roman rings or Tolkien, but other aspects of the place’s history. Ten artist-makers working in a variety of media have interpreted parts of the Vyne story in especially created works, which are currently on display around the mansion and in its lovely Summer House.

My favorite is this work by Maria Rivens. In the library she has created a piece that literally pulls all sorts of stories out of books similar to the ones in the Vyne’s collection:

Short Cuts and Pop-Ups, by Maria Rivens

A very effective piece is Two Dancers by Charlie Whinney: two lengths of steam-bent wood (Ash for the male, and Oak for the female) twist and sweep around each other in the Large Drawing Room.


The enigmatic “Mrs Smith”‘s Party Birds doesn’t quite do it for me, though I like its anarchic intent. Most of the party birds are raving it up in the Summer House, but some have sneaked into the Saloon with an old wind-up phonograph, which visitors are invited to play heavy shellac records on. The selection is all bird-themed and I chose to play A Nightingale Sang in Berkeley Square. I left that record on the turntable, and later, when I was elsewhere, I heard it being played again. The sound of it drifting through the open doorways was somehow more effective than when I was standing by the machine itself. There’s something there I don’t quite understand about music intentionally played and listened to, and that which (as the movies have it) is incidental. I need to ponder on that.

One last lovely piece really needs unpacking. If you go and see the show, do make sure you are there when one of the volunteers is demonstrating it. It’s a tiny automaton created by John Grayson, which draws an analogy between an incident at the Vyne and last year’s “Plebgate” hoo-ha.

Gate Gate by John Grayson, a tiny automaton

I recommend a visit to this show, which is included in the normal price of admission (free to National Trust members). Here’s a link to a video which explains a little more.