Iterative development

Today I am looking at a paper about an Augmented Reality (AR) project at the Svevo Museum in Italy. The AR part of the project interests me less than their methodology. As the authors themselves conclude, AR is a young technology, and at the moment the tools for developing AR experiences are mostly in the hands of technologists, which “could prevent the successful development of experiences focused on content rather than on technology which are capable of attracting diverse categories of users.”

The paper is: Fenu, Cristina & Pittarello, Fabio. 2018. Svevo tour: The design and the experimentation of an augmented reality application for engaging visitors of a literary museum. International Journal of Human-Computer Studies 114: 20-35. doi: 10.1016/j.ijhcs.2018.01.009.

The Svevo Museum is dedicated to the works and life of the writer Italo Svevo, and being a literary museum it has many aesthetic challenges similar to those of the Chawton library where I did my experiment. By aesthetic challenges I mean that the appreciation of literature is not the same as the appreciation of the visual arts. Essentially literature is immaterial, even ephemeral – books, even old books, essentially being containers for the work, not the work itself. Even if they are “only” containers, they are valuable, and more fragile than many museum collections, so providing the access to them that visitors might expect can be an issue. This may have influenced the idea to use AR as part of the experience, though the authors point out that “The Svevo Tour is, to our knowledge, one of the first AR projects conceived for a literary museum and one of the challenges was to use these techniques for engaging its visitors.” That challenge is made more acute by the nature of most of their visitors – adults and seniors who are often educators.

There’s not much that’s particularly quotable in this paper (remember, I am doing all this reading for the required “modest” corrections on my thesis), but given the nature of the site, and the iterative approach to development, I might well reference this paper as an example of the sort of rapid prototyping that museum professionals can do by repurposing “off the shelf” software – Wikitude in the case of this project, Scalar for mine.

One thing I do like is that they call Microsoft Hololens “AR”, not Mixed Reality. See my rant about that here.

Studying immersion in virtual tourism

I am less enamoured of the next paper that my external examiner recommended: Raptis, George E., Fidas, Christos & Avouris, Nikolaos. 2018. Effects of mixed-reality on players’ behaviour and immersion in a cultural tourism game: A cognitive processing perspective. International Journal of Human-Computer Studies 114: 69-79. doi: 10.1016/j.ijhcs.2018.02.003.

This paper describes an attempt to measure attention in a “mixed reality” environment, and hypothesises the impact of such an environment on players of a cultural tourism game. I was hoping that it would be a useful attempt to do the sort of big-budget work I had originally intended to do in my studies – tracking user attention in a cultural-heritage environment with both persistent physical natoms (narrative atoms) and more ephemeral natoms (sound, light and other digital interventions). But although it uses the sort of technology to track attention that I had hoped to find budget for (in this case the Tobii Pro Glasses 2 gaze-sampling system), it compares users’ reactions to a game that is available on PC (i.e. screen-based) and also on Microsoft Hololens. Now, Hololens is marketed by Microsoft as a “mixed reality” system, but I am not convinced it is. It is a reasonably sophisticated augmented reality system, but all it does is overlay the user’s environment with an image projected onto the goggles of the headset that they wear. Yes, it models the physical environment reasonably well, so that (when I had a chance to use it) I could “put” a virtual archaeological model of a ship on a table then walk around the table to look at the ship from different angles. But I could not interact with the virtual by manipulating the physical. I have seen better “mixed reality” with an Xbox and a sandpit.

The game used in this study is a case in point. Holotour, described as “a playful audiovisual three-dimensional virtual tourism application [that] transforms users to travellers, allowing them to see and explore virtual reality environments and experience physical places in space and time without physically travelling there”, can be used on a screen or on Hololens. It does not involve physical reality at all. It’s a very simple point-and-click adventure game with the object of collecting hidden objects and adding them to your inventory. The only difference between the on-screen version and the Hololens one (as far as I can ascertain from this paper) is whether you use a mouse and cursor to point and click, or your finger, held up in the field of vision of your goggles. So it’s not as useful as I had hoped, as it does not track visitors’ attention around a physical site.

(This isn’t to say it’s not a useful paper to somebody – after all, virtual tourism might be all we can do in these Covid times.)

I did learn something new (to me) in this paper, however: a model of cognitive style (or preference – see previous rants about learning styles) called Field Dependence-Independence (FD-I). “FD-I style is a single-dimension model which measures the ability of an individual to extract information in visually complex scenes.” It may not be as new to me as I think – I recall reading a book, or a chapter in a Conceptual Development book, during my first degree (thirty years ago) by (I think) Susan Greenfield about how some people (generally younger and games-literate) were better able to follow the story in Hill Street Blues, because that drama was one of the first to feature multiple stories happening on the screen at the same time. I don’t recall her mentioning FD-I, but it kind of sounds like the same thing. Anyhow, “FD individuals tend to prefer a holistic way when processing visual information and have difficulties in identifying details in complex visual scenes. On the other hand, FI individuals tend to prefer an analytical information processing approach, pay attention to details, and easily separate simple structures from the surrounding visual context.” I wonder which I am (from my failure to take in all the info on a game’s screen, I am guessing FD).

Heritage Soundscapes

At my viva my external examiner pointed me towards this interesting paper, which she had co-authored – partly, I think, as an example of how I should restructure the discussion of my Chawton experiment in my thesis. But it contains some real gems (like “the museums studies literature points out the restorative value of an aesthetic experience that is clear of any information acquisition or learning objective and is centred instead on the sensorial experience of being there”) that make me regret missing it in my literature review: Marshall, M., Petrelli, D., Dulake, N., Not, E., Marchesoni, M., Trenti, E. & Pisetti, A. 2015. Audio-based narratives for the trenches of World War I: intertwining stories, places and interaction for an evocative experience. International Journal of Human-Computer Studies: 27-39.

It’s a case study of a prototype “visitor-aware personalised multi-point auditory narrative system that automatically plays sounds and stories depending on a combination of features such as physical location, visitor proximity and visitor preferences”, Voices from the Trenches, for a First World War exhibition at the Museo Storico Italiano della Guerra in Italy. What particularly interests me is that it’s part of the meSch project, which has some other outcomes that I refer to in my thesis. The paper describes their intent to move away from what they call “the information-centric approach of cultural heritage.” I am sure a number of my professional colleagues would bridle somewhat at this accusation. After all, did not Tilden tell us in the ’50s that interpretation was more than mere information? But one of the things that my Chawton experiment uncovered was that actually too much “interpretation” turns out to be mere information after all.

The authors summarise previous experiments in responsive soundscapes, such as LISTEN, which “composes a soundscape of music and/or commentaries depending on the detected visitor’s behaviour: visitors that are not close or are moving are classified as unfocussed and for them a soundscape is created, while visitors that are standing still and close to the artwork are classified as focussed and a narrative (e.g. the curator describing the artwork) is played over the headphones.” Though many soundscapes are delivered by headphone, to avoid sound pollution for other visitors, the interesting project SottoVoce is designed around eavesdropping on what other people in one’s party are listening to. Half the respondents (in groups of two) heard the soundscape from each other’s phone speakers, while the other half had headphones. “When in loudspeaker mode visitors focussed on what was displayed on the screen of the mobile device and stayed close to the sound source while partners linked via the same audio on their headphones had a more dynamic visit driven by each other’s interest in the exhibits.”
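Reading that description, the LISTEN logic is simple enough to sketch in code. Here is a minimal, hypothetical Python rendering of the focussed/unfocussed classification as I read it from that quote – the thresholds, names and structure are my own assumptions, not anything published by the LISTEN project:

```python
from dataclasses import dataclass

# Assumed thresholds - the paper doesn't give LISTEN's actual values.
ENGAGEMENT_DISTANCE_M = 1.5   # "close to the artwork"
STILLNESS_SPEED_MS = 0.2      # below this, treat the visitor as "standing still"

@dataclass
class VisitorState:
    distance_to_artwork: float  # metres
    speed: float                # metres per second

def classify(visitor: VisitorState) -> str:
    """Classify a visitor as 'focussed' or 'unfocussed', per the LISTEN description."""
    is_close = visitor.distance_to_artwork <= ENGAGEMENT_DISTANCE_M
    is_still = visitor.speed <= STILLNESS_SPEED_MS
    return "focussed" if (is_close and is_still) else "unfocussed"

def select_audio(visitor: VisitorState) -> str:
    # Focussed visitors hear a narrative (e.g. the curator describing the artwork);
    # unfocussed visitors hear a composed soundscape instead.
    if classify(visitor) == "focussed":
        return "curator_narrative.mp3"
    return "ambient_soundscape.mp3"

print(select_audio(VisitorState(distance_to_artwork=1.0, speed=0.05)))  # curator_narrative.mp3
print(select_audio(VisitorState(distance_to_artwork=4.0, speed=1.1)))   # ambient_soundscape.mp3
```

The interesting design decision is all in those two thresholds: set them wrong and a visitor lingering just outside the radius never hears the narrative at all.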

“The ability to convey and evoke emotion is a fundamental aspect of sound” they say, and explain: “The affective power of voice and audio storytelling has been recognised as creating a connection to the listener and is even amplified when spoken words are not coupled with the visual capture of the storyteller, creating a sense of intimacy and affective engagement.” And they built their soundscapes using the same sort of mix of music, speech and other sounds that I used (in a limited fashion) at Chawton. Some of the primary source material was recorded to sound more like oral history, with actors reading the words “with palpable emotion” to be more affective. The responsiveness is similar to that of LISTEN, but the “staying still” metric isn’t used; instead a simpler proximity method is employed. Woven into that soundscape are voice recordings for attentive listening, selected by the visitor choosing from a selection of cards. The sound was delivered by loudspeakers but, unlike SottoVoce, not on people’s own devices – rather, placed around the site. This was what I did for Chawton UNtours too.

The particular challenge with this project was that it was outdoors. The difficulties of maintaining equipment, connecting power and data etc. mean that most sites resort to delivering via mobile device. But on the other hand: “While engagement in a museum tends to be via prolonged observation, in an outdoor setting multiple senses are stimulated: there is the physical, full-body experience of being there, the sight and the sound of the surroundings, possibly the smell too. The multi-sensory setting places the visitor in direct connection with the heritage and enables engagement at an emotional, affective level rather than at a pure informative level.” (p6) The danger of using a mobile device to deliver interpretation is one I wrote about here, but essentially it takes visitors out of where they are; it is the antithesis of presence.

With all this in mind the designers of the project set out five clear principles:

  • To engage at multiple levels, not just cognitive
  • To focus the visitors’ attention on the heritage, not the technology
  • To deal with group dynamics sensibly
  • To be provocative and surprise visitors, but design simple and straightforward interactions
  • To personalize content on the basis of clear conditions

The choice of sound over anything screen-based was an outcome of the second principle. The choice of loudspeakers rather than headphones was also an attempt to focus attention on the heritage: “During a small experiment in a local outdoor heritage site, we observed that audio creates a wider attraction zone where passers-by become aware of the sound source, and a closer engagement zone around the emitting point where one has to stop and listen in order to understand what the voice says.”

So they designed a soundscape that featured music and sound to attract visitors to a location, and then voice recordings to hold them there. The narratives are arranged thematically, with different voices (authoritative and intimate) indicating the nature of the content. Quite how the visitor chooses is not really made clear, but I expect it is by approaching the voices that most attract them.
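That attraction-zone/engagement-zone idea translates naturally into code. A toy sketch – my own construction, with invented radii, not the project’s implementation – might drive each loudspeaker with two distance thresholds: beyond the attraction zone, silence; inside it, music and sound to draw the visitor in; inside the tighter engagement zone, the voice recordings too:

```python
import math

# Assumed radii in metres; the paper does not give figures.
ATTRACTION_RADIUS_M = 10.0  # passers-by become aware of the sound source
ENGAGEMENT_RADIUS_M = 2.0   # close enough to stop and make out the voice

def distance(a: tuple, b: tuple) -> float:
    """Straight-line distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def layers_to_play(visitor_pos: tuple, speaker_pos: tuple) -> list:
    """Return the audio layers a loudspeaker should play for the nearest visitor."""
    d = distance(visitor_pos, speaker_pos)
    if d > ATTRACTION_RADIUS_M:
        return []                          # out of earshot: stay silent
    if d > ENGAGEMENT_RADIUS_M:
        return ["music", "ambient_sound"]  # attract the visitor towards the spot
    return ["music", "ambient_sound", "voice_narrative"]  # hold them there

print(layers_to_play((3.0, 4.0), (0.0, 0.0)))  # ['music', 'ambient_sound']
print(layers_to_play((1.0, 1.0), (0.0, 0.0)))  # all three layers
```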

The team trialled the idea by observing visitors’ behaviour using about 23 minutes of content, but I was disappointed that they did not come up with any solutions to the problems we encountered trying to evaluate the soundscape at The Vyne. It is hard to observe and distinguish between active listening and background listening. The authors seem to assume that if the active-listening content is playing, then the participants are actively listening. The only evidence they have for this is a qualitative questionnaire, which I am not convinced is an accurate measure of engagement. Yes, they said they enjoyed and benefitted from the experience, but if they had not known that was what was being tested, what proportion would even have mentioned the soundscape?

Of course they identified a number of challenges, not least fine-tuning the volume to be loud enough to attract attention and yet not so loud as to cause discomfort. This is especially true of the different voices, with some by necessity quieter and more intimate. They also predicted issues over scalability – similar to the ones I planned for but wasn’t able to test properly at Chawton: “how well would such a system work in a busy environment with many groups interacting.”

Personalisation redux

My external examiner at my viva was Daniela Petrelli, an academic in the field of HCI (Human-Computer Interaction) whom I had referenced a few times in my thesis, particularly after discovering she was behind a platform to help curators write the sort of content I had created for Chawton. I found that work too late, after completing the Chawton experiment. Among the “modest” changes that Daniela recommended at my viva is a considerable amount of further reading, including this paper, which to my shame I had not discovered in my literature search, and which would have saved me a whole lot of reading and improved my PhD! (Which is of course what a viva is for 🙂)

The paper (Ardissono, L., Kuflik, T. & Petrelli, D. 2012. Personalization in cultural heritage: the road travelled and the one ahead. User Modeling and User-Adapted Interaction: 73-99.) is an incredibly useful survey and summary of personalisation techniques employed in cultural heritage up to 2012. I am pretty sure it came out of somebody’s own PhD literature search. It is very biased, of course, towards computer-enabled personalisation (because it comes out of that discipline), but it looks at 37 separate projects and charts a history of personalisation since the early ’90s, “when some of the adaptive hypermedia systems looked at museum content (AlFresco (Stock et al., 1993), ILEX (Oberlander et al., 1998)) and tourism (AVANTI (Fink et al., 1998)) as possible application domains” (p7). These early experiments included “a human-machine dialogue on Italian art and combined natural language processing with a hypermedia system connected to a videodisc”, and “automatically generated hypertext pages with text and images taken from material harvested from existing catalogues and transcriptions of conversations with the curator”.

The authors chart the development of web-based interfaces that don’t rely on kiosks or laserdiscs, through WAP (Wireless Application Protocol – which delivered a very basic web service to “dumb” mobile phones) to multi-platform technologies that worked on computers and Personal Digital Assistants. They note two parallel streams of research – “Hypermedia and Virtual Reality threads” – adapting the content to the user, presenting overlays on maps, etc. The appearance of PDAs saw personalisation becoming more context-aware, with plug-in GPS units, but the difficulty of tracking people indoors led to experiments in motion sensing. Petrelli herself was involved in HyperAudio, wherein “standing for a long time in front of an exhibit indicated interest while leaving before the audio presentation was over was considered as displaying the opposite attitude” (I might need to dig that 2005 paper out, and the 1999 paper on HIPS).
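That HyperAudio heuristic is easy to express in code. A speculative sketch – mine, with an invented threshold, not anything from the HyperAudio papers – of inferring interest from dwell time relative to the length of the presentation:

```python
def infer_interest(dwell_seconds: float, presentation_seconds: float) -> int:
    """Return +1 (interest), -1 (the opposite attitude) or 0 (no strong evidence),
    following the HyperAudio heuristic quoted above."""
    LONG_STAY_FACTOR = 1.5  # assumed: staying well past the end signals interest
    if dwell_seconds < presentation_seconds:
        return -1  # left before the audio presentation was over
    if dwell_seconds >= presentation_seconds * LONG_STAY_FACTOR:
        return +1  # stood there for a long time
    return 0

# A user model might then nudge the weight of that exhibit's topics up or down.
print(infer_interest(40, 60))   # -1
print(infer_interest(120, 60))  # +1
```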

There is also an excellent section on the methodologies used for representing information, modelling the user, and matching users and content. When it talks about information, for example, it suggests different hypermedia methodologies (I’ve sketched the “bag of words” idea just after the list), including:

  • “A simple list of objects representing the exhibition as “visit paths” (Kubadji (Bohnert et al., 2008));
  • Text descriptions and “bag of words” representations of the exhibits on display (Kubadji and PIL);
  • “Bag of concepts” representations generated by natural language processing techniques to support a concept-based item classification (CHAT (de Gemmis et al., 2008)); and
  • Collection-specific ontologies for the multi-classification of artworks, such as location and culture, and multi-faceted search (Delphi toolkit (Schmitz and Black, 2008))”
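To make the second of those concrete: a “bag of words” representation reduces each exhibit description to word counts, so exhibits (or an exhibit and a visitor’s interests) can be compared numerically. A toy illustration – my own, not taken from Kubadji or PIL:

```python
from collections import Counter
import math

def bag_of_words(text: str) -> Counter:
    """A crude bag of words: lowercased word counts, punctuation stripped."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words (1.0 = identical vocabulary)."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

manuscript = bag_of_words("A handwritten manuscript of the novel, with the author's corrections.")
desk = bag_of_words("The desk at which the author corrected the manuscript of the novel.")
print(round(cosine_similarity(manuscript, desk), 2))  # similar exhibits score higher
```

A recommender built on this could suggest the exhibit whose bag of words best matches those a visitor has already lingered over.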

The paper also articulates the challenges for heritage institutions wanting to personalise their user experience, including a plethora of technologies with no standards yet reaching critical mass. Tracking users outside (before and after) their heritage experience is another challenge – membership organisations like the National Trust have a distinct advantage in this regard, but have spent most of the decade since this paper was written getting there. Of course heritage visits are made as part of a group more often than by individuals, and personalisation by definition is about individuals – yet in most of the projects in this survey, the social aspect was not considered. The paper also acknowledges that most of these projects have involved screen-based hypermedia, while augmented reality and physical interaction technologies have developed alongside.

Evaluation is a challenge too. In a section on evaluation which I only wish I had read before my project, the paper foreshadows all the difficulties I encountered. But it also says “a good personalization should go unnoticed by the user who becomes aware of it only when something goes wrong.” (p25) It is reassuring too that the paper concludes “the real issue is to support realistic scenarios – real visitors and users, as individuals and groups in daily interactions with cultural heritage. It is time to collaborate more closely with cultural heritage researchers and institutions” (p27), which is (kind of) what I did. I had better quote that in my corrections and make it look as though I was inspired by this paper all along 🙂.

First impressions of Hololens

A couple of weeks back, I had my first experience with Microsoft’s Hololens. The university acquired a number of units to experiment with. My archaeology colleague Pat Tanner has been trying one out, and showed me and learning expert Sarah Fielding progress so far. Pat is a traditional shipwright by trade and a PhD student, exploring the archaeological evidence of boat-building techniques. Some of the results of his work are available here.

To work with Hololens, Pat had learned the basics of Unity 3D, so that he could place a relatively simple model he’d already made “into” the Hololens. That’s the model we were looking at.

The ship model that Pat demonstrated in Hololens (obviously this is a poster presentation, not what you see in the augmented reality). (c) Pat Tanner

Wearing the unit was more comfortable than I expected. It’s lighter than I imagined, and the weight distribution is better than the front-heavy VR units that I’ve tried. That’s not to say that it isn’t still a little front-heavy, but it is not as tiresome as Vive or Oculus Rift. Talking of which, it has one huge benefit over VR devices: I can wear it for more than a minute without it making me nauseous. That matters to me, as I normally can’t explore the wonders of VR properly.

One of the reasons for the lack of nausea is the fact that the objects you see are in the real space. It’s Augmented Reality, not Virtual Reality. (Microsoft insist on calling it Mixed Reality, but I’m not convinced they are right to do so. When it interacts with physical objects, then maybe.) So, Pat creates an entirely blank stage in Unity 3D, which in virtual space overlays the meat-space room we are in. Then he imports his model and positions it on, say, the table in front of us. Of course the table doesn’t exist in the Unity 3D world. There, the ship model is just floating in space, but wearing the Hololens you can see it, ghostly, but because of the stereoscopic vision it has the illusion of some “weight”. You can turn away from it, walk away from it, or walk closer, bend down to peer inside. The bulwarks of the ship are no barrier to you. You can push your head through the sides of the ship to look at the lower decks.

Or, stepping back, you can use gestures to “grab” the model and move it about. A camera system in the headset looks out for these gestures, and raising a finger in your eyeline places a box around your object, with handles at each node. Once you’ve moved the model to where you want it, a pinch motion on top of those nodes lets you grab that and, depending on which node you have hold of, you can spin the model around or scale it. This interface seems a bit clunky, as though, in the absence of a new paradigm for AR gesture interfaces, we’ve fallen back on what worked with mouse controls for 3D modelling on screen. I’m sure, as more people develop AR devices and applications, a new gesture paradigm will arise.

But there may be another reason why using Hololens was more comfortable than VR units. And that reason may be something that I found a little disappointing. I was expecting my vision to be filled with the augmented world. As it was, what I saw was a “letterbox”, with the models sliding “off screen”, not into the periphery of my vision, if I turned my head. That letterbox effect was initially exaggerated, before I realised that the head-band itself was obscuring the throw of the projection, blanking out the top third of the image. Adjusting the headset to push the band further back, and lowering the lens back down in front of my eyes, gave me a slightly taller letterbox, but still a letterbox. Now, I don’t know if this was a limitation of the hardware, or of the Unity 3D set-up that Pat was using, but I must admit to being a little disappointed in that aspect (…ratio 🙂 – did you see what I did there?)

Overall though, I was impressed. Can I see heritage places equipping their visitors with Hololens to overlay what’s there today with what might have been? Not at these prices, but I am sure that this sort of AR has a brighter future than those VR headsets.


Pokémon Big Heritage event, Chester

It had to happen, and Big Heritage stepped up to the plate and made it happen. Tomorrow and Sunday, there will be a Pokémon Big Heritage event around the streets of Chester.

Part of Chester’s Heritage Festival, but officially in partnership with Niantic, the creators of Pokémon Go, the event was brought to my attention via the Pokémon Go app. Chester Castle will be open to the public for the first time, and there will be re-enactors a-plenty there, but there will also be Pokestops and Pokegyms. There are also two paper-based trails: a Pokémon Pastport that you can get stamped at four (currently secret, to be revealed on the day) locations; and a ten-question quiz trail that you’ll need the help of the app to solve.

Big Heritage may have been canny in approaching Niantic for an event this weekend: it’s the first anniversary of the launch of Pokémon Go. Would Niantic be so willing to support similar events in the future, at different times of the year?

My family are cast to the three corners of the country that aren’t near Chester this weekend, so I won’t be able to go. But I’ll try and drop Big Heritage a line, and see if they’ll share their evaluation. 2,400 Facebook users have said that they are planning to attend. Are they all from Chester? Or are any of them travelling? Of course, Niantic will know exactly where everyone comes from 😉

Abstract: Digital Personalisation for Heritage Consumers

I’m speaking at the upcoming Academy of Marketing E-Marketing SIG Symposium: ‘Exploring the digital customer experience: Smart devices, automation and augmentation’ on May 23 2017. This is what I wrote for my abstract:

Relevance to Call: Provocation, Smart Devices, Augmentation of the Customer Experience

Objective: A work-in-progress research development project at Chawton House explores narrative structure, extending the concept of story Kernels and Satellites to imagine the cultural heritage site as a collection of narrative atoms, or Natoms, both physical (spaces, collection) and ephemeral (text, video, music etc.). Can we use story-gaming techniques and digital mobile technology to help physical and ephemeral natoms interact in a way that escapes the confines of the device’s screen?

Overview: This provocation reviews the place of mobile and location technologies in the heritage market. Digital technology and social media are in the process of transforming the way that the days-out market is attracted to cultural heritage places. But on site, the transformation is yet to start. New digital interventions in the heritage product have not caught on with the majority of heritage consumers. The presentation will survey the current state of digital heritage interpretation, and especially the use of location-aware technologies such as Bluetooth LE, NFC, or GPS. Most such systems deliver interpretation media to the device itself, over the air or via a prior app download. We explore some of the barriers to the use of mobile devices in the heritage visit – the reluctance to download proprietary apps; mobile signal and wifi complexities; and, most importantly, the “presence antithesis”: the danger that the screen of the device becomes a window that confines and limits the user’s sensation of being in the place and among the objects that they have come to see. Also, while attempts to harness mobile technology in the heritage visit display interpretation that is both more relevant and, in some cases, more personalised to the needs of the user, they also tend towards a “narrative paradox” – the more the media is tailored to the movements of the user around the site, the less coherent and engaging the narrative becomes.

Method: Story-games can show us how to create an experience that balances interactivity and engaging story, giving the user complete freedom of movement around the site while delivering the kernels of the narrative in an emotionally engaging order. At Chawton we plan to “Wizard of Oz” an adaptive narrative for that place’s visitors.

Findings: Work so far demonstrates that a primary challenge for an automated system will be negotiating the competing needs of different groups and individuals within the same space. The work at Chawton looks to address this.

**

This is the first time I’ve written an abstract in this format, and I found it quite a challenge. What you add in and leave out is always a difficult decision, and this format, which was limited to one side, had me opting to leave out the references which I might have made room for if I had not had to write something under each of the prescribed headings. It’s also the first time I have had formal feedback on an abstract, which I share below:

Relevance to call: Good fit – smart devices, user experience, augmentation, culture (5)
Objective: A practical case example of augmentation in a heritage setting (5)
Lit rev: No indication of theory used, as this is a practical case study (n/a)
Method: A specific case of Chawton House presented (5)
Results: Interesting findings re barriers to use of mobile devices in heritage, and the experience evaluation (4)
Generalisations: Interesting and original context of heritage institution using augmentation, can extend to other heritage sector applications (4)
Total: 23/25

**

So, not a bad score, but I wonder what I would have got (out of 30?) if I had included the references. Does the bibliography count within the one page limit? Or, could I have included it on a second side?

Still, no time for those questions. I have to write the actual presentation now. 🙂

Heritage managers, you need to be thinking about Pokémon Go

I don’t normally post on Wednesdays, but I am driven to write tonight, because something is happening that seems to be an actual phenomenon. Pokémon Go, the locative game from Niantic using IP from Nintendo, keeps breaking records. It is apparently already the biggest mobile game ever in the US. Not just the biggest locative game: this game is bigger than Candy Crush.

Long-time readers may remember the post I wrote introducing some research into attitudes to locative gaming. I’d run an internet survey pushed towards gamers from all around the world. At the time, the biggest locative game around was Niantic’s Ingress. I’d asked everyone what they knew of a list of different digital games, and got about 220 responses: 178 respondents had never even heard of Ingress, which was at the time “taking the world by storm”. A site called Android Headlines said that. Let me tell you, AH (I can call you AH, can I?), you don’t know storms.

Another post on that same survey concluded “I can’t yet claim from this research, that the world is ready and waiting for locative games.”

Well maybe it is now.

What does that mean for heritage sites? Well, I don’t think it means heritage organisations should rush out their own AR scavenger hunts. But it does mean that people are already using your sites to play games. A few weeks ago, a team member from one of the places I’m currently spending time at for work told me about a security alert. In the middle of the night they went to investigate and found three people who had broken into the gardens. The people explained that they were there to take control of an Ingress portal.

Heritage locations are already, without their knowledge, Ingress portals. They are very likely already “Pokestops” too. This may be a problem for some sites’ spirit of place. It’s already being seen as an opportunity by others. [EDIT: This article on what you can do if you find that your place is a Pokestop is also interesting.] I bet there are already many more Pokémon Go players in the UK than there are players of Ingress, and it hasn’t even been released in this country yet.

It’s happening. It’s big, very big. Heritage managers, you need to be thinking about this.

Walking around looking at stuff

Image from Aalto University, Media Lab Helsinki

A few weeks ago, I was presenting my work to a group of my supervisor’s Masters students. I joined in on the preceding seminar session, during which they talked about a number of experiments in digital interpretation in museums.

One thing that struck me about many of the experiments was that they each required the museum visitor to use a new interface. Some interfaces were simpler than others. One involved shining a torch; another was planned to involve gestures to navigate a reconstruction of a sunken ship. This second interface, a Vrouw Maria exhibit at a Finnish maritime museum, challenged users who “would not understand what they were expected to do or, when they could start the navigation, problems that were accentuated by the tracking system, which was not completely reliable at that point. […] The navigation itself was not error free either: people had difficulty stopping the motion and steering up or down. In addition, it was hard to hit the info spots without running past or through them. Again, tweaking the parameters of the gestural interface was needed. Pointing around for 10 minutes or more with the arm extended started to get tiring—something that cannot be completely solved if the input is so heavily based on pointing.” (Reunanen et al., 2015). The discussion made me think about not just these experimental interfaces, but pretty much every museum interactive kiosk or app created since digital technology arrived on the scene.

To a lesser or greater extent, all these technologies involve museum visitors having to learn a new interface to access data. Some may prove easier than others to learn, but all of them are different; all of them need to be learned. Which makes accessing the data just one step more difficult. On the other hand, there is a generic interface which museum, gallery and heritage site visitors learn (it seems, for most individuals) in early childhood. The default museum interface is:

Walking around and looking at stuff

… as I said to a colleague yesterday. (Well, actually I said “walking around and looking at shit,” but I meant shit in the most inoffensive way. And though I’d dearly have loved to headline a blog post with this more colloquial version, I’m mindful of my curatorial and conservation colleagues, and I don’t want them to feel I’m demeaning our collections.)

What prompted me to write about it today was the news yesterday that Dear Esther is to be re-released for the PlayStation 4 and Xbox One. Dear Esther is “credited” with kicking off a genre of games known as “walking simulators” or “first person strollers”, and criticised by many gamers as not being a game because (among other things) there is no challenge (unless you count interpreting the enigmatic story that your simulated walk reveals).

I’m reminded of Gallagher’s (2012) observation (in the brilliantly titled No Sex Please, We Are Finite State Machines) that “Video games are unique in the field of consumer software in that they intentionally resist their users, establishing barriers between the operator and their goal.” This contrasts somewhat with what Nick Pelling (who coined the term Gamification, as I discussed last week) said about game interfaces “making hard things easy, expressive, near-effortless to use.” So which is it? Are game interfaces easy or difficult? Juul and Norton give a pretty conclusive answer: it’s both.

“Games differ from productivity software in that games are free to make easy or difficult the different elements of a game. While much may be learned from usability methods about the design of game interfaces, and while many video games certainly have badly-designed interfaces, it is crucial to remember that games are both efficient and inefficient, both easy and difficult, and that the easiest interface is not necessarily the most entertaining.”

The team behind that Vrouw Maria experiment had considered making users mime swimming for the gestural interface, but they ruled it out because it was “engaging but at the same time socially awkward in front of an audience.” What they ended up with was an interface that was neither efficient nor entertaining. While it may indeed have been socially awkward for many, the swimming gesture control would have been very entertaining. Their final decision indicates that they considered the transmission of data the more important purpose of the exhibit.

Last week I discussed how gamification is most often used as a way of motivating behaviour: drive more efficiently, take more exercise. “Explore more” is something many museums and heritage sites wish for their visitors. An interface that is challenging but entertaining may well motivate more exploration. But there is an alternative.

Dear Esther is arguably not a game, because its interface (basically Walking Around Looking at Shit Stuff) is too easy. Yet its designers would argue that it is a game, just one that uses story as a motivator rather than challenge. For museums and heritage sites, where Walking Around Looking at Stuff has long been the default interface, Dear Esther might offer a model for digital storytelling that motivates more exploration.

This is what I’m trying to achieve with my responsive environment: digital content, compelling stories, that are accessed by Walking Around and Looking at Stuff.

Cultural Agents

I’ve been reading Eric Champion’s Critical Gaming: Interactive History and Virtual Heritage. Eric asked his publishers to send me a review copy, but none was forthcoming, and I can’t wait for the library to get hold of a copy – I want to quote it in a paper I’m proposing – so I splashed out on the Kindle edition. I think of it as a late birthday present to myself, and I’m not disappointed.

One thing that has struck me so far is a little thing (it’s a word Champion uses only three times), but it seems so useful I’m surprised it isn’t used more widely, especially in the heritage interpretation context. That word is “multimodality”. As Wikipedia says (today at least), “Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources – or modes – used to compose messages.” But it’s not just about multimedia; “mode” involves the social and cultural making of meaning as well. Champion says:

Multimodality can help to provide multiple narratives and different types of evidence. Narrative fragments can be threaded and buried through an environment, coaxing people to explore, reflect and integrate their personal exploration into what they have uncovered.

Which is surely what all curated cultural heritage spaces are trying to achieve, isn’t it? (Some with more success than others, I’ll admit.) Champion is referring to the multimodality of games and virtual environments, but it strikes me that museums and heritage sites are inherently multi-modal.

It sent me off looking for specific references to multimodality in museums and heritage sites, and indeed I found a few – this working paper, for example, and this blog – but there are not many.

But I digress. I’ve started Eric’s book with Chapter 8 (all the best readers start in the middle), Intelligent Agents, Drama and Cinematic Narrative, in which he examines various pre-digital theories of drama (Aristotle’s Poetics, Propp’s formalism (with a nod in the direction of Bartle and Yee) and Campbell’s monomyth), before crunching the gears to explore decidedly-digital intelligent agents as dramatic characters. Along the way, he touches upon “storyspaces” – the virtual worlds of games, which are by necessity incomplete, yet create an illusion of completeness.

His argument is that there is a need for what he calls “Cultural Agents”: representing, recognising, adding to, or transmitting cultural behaviours. Such agents would be programmed to demonstrate the “correct cultural behaviors given specific event or situations” and to recognise correct (and incorrect!) cultural behaviours. For example, I’m imagining here characters in an Elizabethan game that greet you or other agents in the game with a bow of the correct depth for each other’s relative ranks, and admonish you if (in a virtual reality sim) you don’t bow low enough when the Queen walks by.
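My Elizabethan example is easy to caricature in code. A hypothetical sketch of a cultural agent – entirely my own invention, not from Champion’s book – in which an NPC judges whether a player’s bow suits the gap between their ranks:

```python
from enum import IntEnum

class Rank(IntEnum):
    COMMONER = 0
    GENTLEMAN = 1
    KNIGHT = 2
    EARL = 3
    QUEEN = 4

def expected_bow_degrees(own: Rank, other: Rank) -> int:
    """Degrees of bow owed to `other`: deeper for a greater rank gap, none to inferiors.
    The 15-degrees-per-rank rule is an invented placeholder, not period etiquette."""
    return max(int(other) - int(own), 0) * 15

def judge_bow(npc: Rank, player: Rank, bow_degrees: int) -> str:
    """A cultural agent recognising correct (and incorrect!) bowing behaviour."""
    expected = expected_bow_degrees(player, npc)
    if bow_degrees >= expected:
        return "The courtier nods, satisfied."
    return f"The courtier frowns: one owes a {expected}-degree bow to such a personage!"

print(judge_bow(npc=Rank.QUEEN, player=Rank.GENTLEMAN, bow_degrees=20))  # admonished
print(judge_bow(npc=Rank.KNIGHT, player=Rank.GENTLEMAN, bow_degrees=30)) # satisfied
```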

Which leads on to what he calls the “Cultural Turing Test […] in order to satisfy the NPCs [non-player characters] that the players is a ‘local’, the player has to satisfy questions and perform like the actual local characters (the scripted NPCs). Hence, the player has to observe and mimic these artificial agents for fear of being discovered.” (As he points out, this is in fact a reversal of the Turing test.)

Then he shifts gear again to look at machinima (the creation of short films using game engines, which I learned about back in Rochester) as a method for users to reflect on their experience in-game, and edit it into an interpretation of the culture the game was designed to explore. It’s a worthy suggestion, and could be excellent practice in formal learning, but I fear it undermines the game-play itself if it becomes a requirement of the player to edit their virtual experiences before comprehending them as a coherent narrative.

All in all though, I can already see that the book will be an enjoyable and rewarding read.