CAAUK and Fragmented Narrative

I’m posting this from Day Two of the CAAUK conference. I think there may be more of relevance to my studies today, though yesterday was by no means disappointing. There were many thought-provoking points made, and I got to meet more of my fellow Southampton students than I’ve managed to so far while actually at the university.
I also heard yesterday that my seminar on Thursday might be streamed to York University’s Cultural Heritage students. Argh! Scary. I was already thinking it wasn’t brilliant, but with this news I resolved to rewrite it. I might have got away with being a bit crap in front of my Soton cohort, but now I feel I’m representing the university, and having to prove that they were right to take me on in the first place!
However, the train journey gave me the opportunity to think about where my previous version of the presentation had gone wrong.
And the answer was: I was trying to tell my story chronologically. I’d fallen into this trap because this seminar is a bit of a “this is me” moment, as well as “this is what I’m interested in.” But since one of the things I’m interested in is narrative, I should be a bit cleverer at turning story into narrative for this presentation.
So I’ve gone back to the drawing board (or iPad) to mind-map what I wanted to talk about. And I’ve already found a better starting point, and realised that two story elements can be presented in parallel rather than sequentially.
So my presentation will become a “fragmented” narrative. Which is interesting, because I’m thinking a lot about fragmented narratives at the moment. I’ve realised, for example, that the storytelling in Red Dead Redemption isn’t the sophisticated algorithm I’d hoped for, but rather an engagingly fragmented narrative that only occasionally reminds the player that they may have gone “out of sequence.”
But more of that in another post when I’ve finished playing it. In the meantime I’ve been enjoying another fragmented narrative: Chris Ware’s Building Stories.

[Photo: Chris Ware’s Building Stories]
This is a comic. It tells the story of a building and the people who live in it, including the owner and landlady, a young couple falling apart, another couple starting a family and … a bee. All these intertwining stories are told across a number of mini-comics in a number of formats, including ones resembling newspapers, an architect’s portfolio and “Tijuana bibles,” as well as more mundane softcover comics and hardcover albums.


They all come in a box, and there’s no indication of what order you should read them in; you dive into these characters’ lives at whatever point takes your fancy. For example, I’ve already seen the bee’s death. But his life is likely to be the last two comics I choose to read, as all the others are more attractive and accessible (to me). The back of the box even suggests scattering the various comics around your home (and makes suggestions as to where particular ones might go), so despite being an essentially linear medium, these comics have the potential to tell stories around social spaces. Which is sort of what I’m meant to be studying.
Now though, lunchtime at the CAAUK conference is coming to an end, and the most interesting bit (for me) is about to start.

Southampton Tudor House (and more RTI)

I can’t read for a PhD in digital technology and cultural heritage interpretation at Southampton and not visit the recently reopened Tudor House Museum, which touts some of the very latest interpretation technology. So with my daughter on an inset day from school, I thought this would be the ideal opportunity for an educational visit.

We parked at the West Quay shopping centre, and skipped across the road to Bugle Street (what a great name). A very reasonable price for entry, even though I only got 10% off for my MA membership. We took the audio tour wands and started with the AV presentation in the banqueting hall. Lily said later that this was her favourite bit, and I can see why: the room is atmospherically lit with wobbly-wick candles and a “crackling fire” in the hearth. Then, over the spitting of the logs, we hear whispers. Somewhere, somewhere close, people are shushing each other, then talking, about us. Fairy-trails sparkle across the walls and curtains, and all of a sudden the fire and candles are blown out. My daughter, almost twelve, was impressively spooked, especially by the rat she heard in the dark, scuttling under the bench.

And the Timeweaver introduces himself, and the spirits of the house, as friends and guides. They give us a short tape/slide presentation, a potted history of the house and its early 20th-century “restoration,” being careful to point out the features in this room, the musicians’ gallery for example, which are inventions of that restoration. Then the spirits are banished, the great curtain is thrown open and the room is bathed in daylight. We are invited to explore the house, audio wands in hand.

The audio guides direct us to the garden first, to see the remains of “King John’s palace” (which turns out to have nothing to do with King John) and to admire the Tudor-style garden, before heading back into the domestic service areas of the house. It’s worth pointing out that the audio guide has a jarring change of tone here. An authoritative female voice guides us round the garden, presenting us with “facts,” but when we return to the house the Timeweaver greets us again and, with the help of his spirits, reveals the house in more of a storyteller’s style. I thought my audio tour had switched to the children’s commentary, but my daughter said the authoritative woman was her guide round the gardens too.

Inside the house we also met the museum’s “state of the art” technology: GuidA Rotate units from Blackbox-av. These touch-screen panels gave us computer-generated models of the room in which they were sited, at various points in history. Their USP is that they can be rotated around the room, so you always have both the relevant bit of the real world and the computer simulation in front of you. What struck me first were the differences between the “now” model and the reality in front of me.

[Photo: note the arch at floor level]
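
I can only guess at how the units work under the hood, but the principle of keeping the screen in step with the room seems simple enough. A minimal sketch in Python, assuming (hypothetically) that the unit reads its mounting angle from a rotation encoder and passes it to whatever 3D engine renders the historical room models:

```python
def mount_angle_to_camera_yaw(mount_angle_deg: float, offset_deg: float = 0.0) -> float:
    """Map the panel's physical rotation to the virtual camera's yaw,
    so the on-screen model always shows the part of the room the
    visitor is facing. offset_deg aligns the model with the real room."""
    return (mount_angle_deg + offset_deg) % 360.0

# Hypothetical usage: read_mount_angle() and set_camera_yaw() stand in
# for the unit's rotation encoder and its rendering engine respectively.
# set_camera_yaw(mount_angle_to_camera_yaw(read_mount_angle(), offset_deg=90.0))
```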

In every era the model also features “hot spots” which you can touch for a layer of extra interpretation. Sometimes this is directly related to the feature you are looking at, but sometimes the interpretation seemed more generic. Touch a barrel of … what? … salt? for example, and you get a photo of the façade when the house was occupied by dyers, but you still don’t know what was in the barrel.

[Photo: hotspots]

[Photo: click on the barrel and get a historic photo]

A nice touch in the first room is a screen mounted on the wall above the GuidA Rotate, so others in the room can see what the person controlling the device is looking at. And I loved the inclusion of a model of how the room might have looked in the late twentieth century, when it was a museum education room. It means something special when education sessions become part of the historic record.

Later on there’s another GuidA Rotate in a lavish bedroom, showing how the room might have looked in the Tudor, Georgian and Victorian periods. But the same model also features on one of the lenticular panels that are also a feature of the interpretation.

This clever use of an old technology amused me more than the (I’m guessing more expensive and less reliable) GuidA Rotate units.

But what amused me the most was the temporary exhibition, which did something I’ve always wanted to do – challenge the values of museum collecting with the more personal (and more modern) collections of local people. And the thing I’ve especially wanted to do happens here: a comics collection is featured!

[Photo: a comics collection]

When we passed a wall in which faint graffiti had been scratched, I tried to tell my daughter about the day I spent at Winchester using RTI photography to make clearer images of similar graffiti. She wasn’t that interested, but on the other side of the wall are interactive units that allow visitors to look at clearer images of the graffiti – I bet my university colleagues (and their string and shiny balls) have already been involved …

Talking of which, I got an email today with a link to some of the images we created on that day. So I’ll update my post to include it.

Is this the best we can do?

A week or two back, a colleague gave me a sample of the QR code panels that are being piloted along the South Downs Way.

[Photo: the QR code panel being piloted along the South Downs Way]

I was quite excited to see it, because it turned out to be not just a QR code: it also incorporated an NFC chip and a LAYAR augmented reality image.

I’m quite dismissive of QR codes, but only because some people get over-excited about what is, after all, just another way of inputting a URL into a browser. I keep telling my colleagues that a QR code is only as exciting as the website it points to.
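
To labour the point, here is all a QR code really is, in a couple of lines of Python (assuming the third-party qrcode package and a placeholder URL – the real panels obviously point somewhere else):

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

# A QR code is just a machine-readable encoding of a URL (or any text).
# All the excitement, or lack of it, lives at the other end of the link.
img = qrcode.make("https://example.org/south-downs-way")
img.save("panel-qr.png")
```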

But the addition of an NFC chip and the Augmented Reality suggested that a lot more thought had been put into this pilot than some QR codes I’ve seen.

So, I’ve been playing with it, and I’m disappointed. My phone doesn’t have NFC, so I couldn’t try that. But I could download the LAYAR app and have a go with that.

It took a few goes to get LAYAR to recognise the image, but eventually it said “Getting Content”. Then it said “Point at the page again to view LAYAR content” so I obeyed, and …

nothing.

Ho-hum. So I resorted to scanning the QR code. The Scan app quickly recognised the QR, and served up … this:

[Screenshot: the page the QR code served up]

Oh dear. I’d sort of expected a page formatted for mobile devices, not one I’d have to “pinch to zoom” to read. And, more importantly, I’d really expected to be taken to a page that told me about the South Downs Way, not a link to a survey about using QR codes.

To be fair, the little red buttons at the top do link to various places along the South Downs Way, but I had expected each QR code to take me to information about a specific place. Click on one of the buttons and this is what you get:

[Screenshot: the page behind one of the red buttons]

Yawn.

So, lessons learned: format for mobile if you are providing links to web content in the countryside (duh)! Survey your users after they’ve experienced the content. And build the engaging and dynamic web content before you install the QR panels. Oh, what’s that? You did? Well, you and I must have a different understanding of “engaging and dynamic”, my friend.

And I might have shared my thoughts with the developers via their handy and prominent survey, had not all the questions been variations on a) “Let me count the ways in which QR codes are splendid” or b) “I’ve never heard of QR codes”.

All in all, I think National Trails have been sold a pup.

Story and Narrative, Games and Culture

Maybe I’m on the wrong path.

Perhaps I’m on a deserted Scottish island, just like the one in Dear Esther, wandering down a path that is going to come, in time, to a dead end; I won’t be able to climb the rocks, or I will slip down a cliff and find myself on the path I should have taken.

Or am I distracted by a side-quest? Perhaps this is an entertaining branch off the rhizome, an investigation that might prove character building, but that doesn’t get me any further along the real story.

My stated intention was to discover what cultural heritage institutions could learn from games developers about narrative, and whether the way story is applied to the virtual spaces of games had any relevance to telling stories in the three-dimensional spaces of the real world. But today I found my way to a 2003 paper by Henry Jenkins, which kicked off with three quotes that stood like three threshold guardians in my way:

“Interactivity is almost the opposite of narrative; narrative flows under the direction of the author, while interactivity depends on the player for motive power” –Ernest Adams

“Computer games are not narratives….Rather the narrative tends to be isolated from or even work against the computer-game-ness of the game.” –Jesper Juul

“Outside academic theory people are usually excellent at making distinctions between narrative, drama and games. If I throw a ball at you I don’t expect you to drop it and wait until it starts telling stories.”
–Markku Eskelinen

Even though Jenkins makes a case for story and narrative in (some) games he warns against “something on the order of a choose-your-own adventure book, a form noted for its lifelessness and mechanical exposition rather than enthralling entertainment, thematic sophistication, or character complexity.” And with some shame I realise I’ve already used the words “choose-your-own adventure” in this very blog.

Perhaps I should turn back, and find the main path.

And yet, Jenkins goes on to tempt me further down the path, when he introduces “spatiality – and argue[s] for an understanding of game designers less as storytellers and more as narrative architects.” He even goes on to explore how story and space work together in the virtual environment:

“a story is less a temporal structure than a body of information. The author of a film or a book has a high degree of control over when and if we receive specific bits of information, but a game designer can somewhat control the narrational process by distributing the information across the game space. Within an open-ended and exploratory narrative structure like a game, essential narrative information must be redundantly presented across a range of spaces and artifacts, since one can not assume the player will necessarily locate or recognize the significance of any given element. Game designers have developed a variety of kludges which allow them to prompt players or steer them towards narratively salient spaces. Yet, this is no different from the ways that redundancy is built into a television soap opera, where the assumption is that a certain number of viewers are apt to miss any given episode, or even in classical Hollywood narrative, where the law of three suggests that any essential plot point needs to be communicated in at least three ways.”

Writing in 2003, Jenkins admitted that “the player’s participation poses a potential threat to the narrative construction, whereas the hard rails of the plotting can overly constrain the ‘freedom, power, self-expression’ associated with interactivity”, but since then many games have been critically acclaimed for balancing narrative and interactivity, not least the game that started me on this train of thought, Red Dead Redemption (2010), of which one reviewer said “Rockstar has claimed that Red Dead Redemption is the ultimate sandbox game – and dozens of hours in, we can’t help but agree. It’s immersive, engrossing and superbly addictive. In fact, this review almost didn’t happen at all; we were too busy playing cowboys.”

And Googling for Jenkins’ paper also led me to Games and Culture, a journal of interactive media. I’ve already pulled a number of articles out of that publication that I want to read. So maybe this path isn’t as barren as I was beginning to suspect.

One thing I have noticed in the literature already is a slight tendency to conflate or confuse “story” and “narrative.” It is easy to do, and I’m guilty of it myself, especially in conversation. The word I use often depends on who I’m talking to – if I want to appear plain-speaking and unpretentious, for example, I’ll use “story.” But the two words do mean different things, and I must be more disciplined in using them. So from here on in (and comment if you catch me not following these definitions in this blog), “story” is the sequence of events as they occur chronologically, and “narrative” is the way the story is told. Flashbacks, for example, are a narrative tool, and this blog is the narrative tool which I’m using to record the story of my research.
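
Having committed to those definitions, here is a toy illustration in Python (the events are invented for the example): the story is the chronological list; a narrative is any particular ordering and telling of it, so a flashback is simply a reordering.

```python
from dataclasses import dataclass

@dataclass
class Event:
    year: int
    description: str

# The STORY: events in the order they actually happened.
story = [
    Event(2010, "Red Dead Redemption is released"),
    Event(2012, "I start thinking about games and heritage"),
    Event(2013, "I attend CAAUK and rewrite my seminar"),
]

# A NARRATIVE: one particular way of telling that story. Opening in the
# present and flashing back is a narrative choice; the story is unchanged.
flashback_narrative = [story[2], story[0], story[1]]

for event in flashback_narrative:
    print(f"{event.year}: {event.description}")
```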

I fear I might have been an unreliable narrator: I might have given the impression that I have actually played Red Dead Redemption. I have not. On the path ahead of me I see not just the papers I’ve downloaded from Games and Culture, but a horse and a six-gun.

A man’s gotta do what a man’s gotta do.

Today I read Galloway (so you don’t have to)

“There are very few books on new media worth reading.” (Galloway, A., 2012, p1)

My supervisors said something very similar in our first meeting, but despite that assertion, while exploring the university library last week I came across The Interface Effect (Alexander R. Galloway, 2012). It looked interesting, so I took it out. As the blurb says, “Grounded in philosophy and cultural theory and driven by close readings of video games, software, television, painting and other images, Galloway seeks to explain the logic of digital culture through an analysis of its most emblematic and ubiquitous manifestation – the interface.”

Firstly, I’m not convinced it was written as a book. Each of the six chapters (four actual “chapters”, plus an introduction and a postscript) feels as though it was written as a separate article, and only minimally edited to make the collection connect (arguably) into a single narrative. Galloway also demonstrates a mild case of “philosopher’s itch” – preferring not to use an English word when a Greek one is available. For example, when discussing what anyone else would call the “fourth wall” (he is referring to this picture, from which Mad mascot Alfred E. Neuman peeks at the reader), he has to use the words “orthogonally outwards” (p38).

He also asks one of the same questions I’m researching: “Are games fundamentally about play or about narrative?” (p23). This makes his refusal to suggest an answer somewhat frustrating.

Galloway does use the book to make some good points, though. For example, from the preface (pvii): “Interfaces are not things, but rather processes that effect a result of whatever kind” and “culture is history in representational form.” He expands upon the interfaces-are-processes idea on page 18: “What if we refuse to embark from the premise of ‘technical media’ and instead begin from the perspective of their supposed predicates: storing, transmitting and processing? With the verbal nouns at the helm, a new set of possibilities appears. These are modes of mediation, not media per se.”

I also like the passage in which Galloway distinguishes between the cinema and ICT: “In effect, the cinema forces us to don the Ring of Gyges, making the self an invisible half-participant in the world… the computer is an anti-Ring of Gyges. The scenario is inverted. The wearer of the ring is free to roam around in plain sight, while the world, invisible, retreats in absolute alterity. The world no longer indicates to us what it is. We indicate ourselves to it, and in doing so, the world materialises in our image.” (pp11-13) Cinema, he says, is an ontology (the branch of metaphysics dealing with the nature of being) “while the computer is, in general, an ethic… I make the distinction between an ethic, which describes the general principles for practice, and the realm of the ethical… And this is the interface effect again only in different language: the computer is not an object, or a creator of objects, it is a process or active threshold mediating between two states.” (pp22-23) This makes even more sense when he quotes, on page 32, François Dagognet’s assertion that “the interface… consists essentially of an area of choice. It both separates and mixes the two worlds that meet together there, that run into it. It becomes a fertile nexus.” (Dagognet, F., 1982, Faces, Surfaces, and Interfaces, Paris: Librairie Philosophique J. Vrin)

Galloway takes a look at World of Warcraft, and in doing so teaches me more Greek. Look at the screen when someone is playing and you’ll see two things. The player’s character walks around the world, fights etc. in the diegetic space. Diegesis, I discover, is the correct term for the world in which action takes place in any work of fiction. In WoW there’s also a “nondiegetic space… The thin, two dimensional overlay containing icons, text, progress bars, and numbers.” (p42)

He returns to the game later to ask “Why do games have Races and Classes?” (p129). I might have told him the answer to that one – it’s because most of them are based on Dungeons and Dragons, the grand-daddy of pen-and-paper role-playing games, created in the seventies. D&D was a development of Chainmail, a medieval wargame played with toy figures, and races and classes were created as a shortcut for players to differentiate their “characters” from the uniform rank-and-file soldiers that had previously populated the game-world (which I guess I should call the diegesis). Many tabletop roleplaying gamers quickly grew out of using such archetypes, and systems like GURPS (the Generic Universal RolePlaying System) allow far more flexible and nuanced character creation. That said, much of the fiction written in the genre still features racial archetypes, so even GURPS still allows you to create an Elven Archer if you so desire.

Galloway recognises that the Races and Classes in WoW are algorithmic short-cuts, just as they were in Dungeons and Dragons, and points out that “race is static and universal while class is variable and learned… What this means is that race is ‘unplayable’ in any conventional sense, for all the tangible details of gamic race (voice, visage, character animation, racial abilities, etc.) are quarantined into certain hardcoded machinic behaviors.” (pp131-2) This isn’t how even D&D was played: racial abilities (bonuses) might have been hard-coded, but with the player’s imagination providing the rest of “the tangible details of gamic race,” anything was possible. Pretty much anything is possible in the virtual world Second Life too, but WoW’s enduring popularity over that diegesis (! I’d better not wear this new word out) mirrors D&D’s enduring popularity over a myriad of more customisable pen-and-paper RPG systems. Galloway sees this as a problem, and so it may be, but is it really analogous (as he suggests on page 133) to the disgraceful “blackface minstrel” that was Jar Jar Binks? Or is it just that using race and class archetypes is less work for the players than creating a rounded character from scratch?
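
Galloway’s “static and universal” versus “variable and learned” distinction is easier to see in code than in prose. A toy sketch in Python, with invented numbers, loosely D&D-flavoured rather than anything taken from WoW’s actual implementation:

```python
from dataclasses import dataclass, field

# Race: chosen once, hard-coded bonuses, never changes in play.
RACES = {
    "elf":   {"dexterity_bonus": 2, "night_vision": True},
    "dwarf": {"constitution_bonus": 2, "night_vision": True},
    "human": {},
}

@dataclass
class Character:
    name: str
    race: str                       # static and universal
    char_class: str                 # variable and learned
    level: int = 1
    skills: list = field(default_factory=list)

    def level_up(self, new_skill: str) -> None:
        """Class is what gets played: it accumulates levels and skills."""
        self.level += 1
        self.skills.append(new_skill)

archer = Character("Aelith", race="elf", char_class="archer")
archer.level_up("precise shot")     # class changes through play...
# ...but archer.race, and everything in RACES["elf"], never does.
```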

Galloway addresses work too, using the idea of the Chinese gold farmer as his springboard. “Recall the narrative again, that somewhere off in another land are legions of Chinese gamers, working in near sweatshop conditions, playing games to earn real cash for virtual objects.” (p135) He argues that, whatever the truth behind the narrative, we are all in fact gold farmers: “it is impossible to differentiate cleanly between play and work… The new consumer titans Google or Amazon are the masters in this domain. No longer simply a blogger, someone performs the necessary labor of knitting networks together. No longer simply a consumer, browsing through links on an e-commerce site, someone is offloading his or her tastes and proclivities into a data-mining database with each click and scroll. No longer simply keeping up with email correspondence, someone is presiding over the creation and maintenance of codified social relationships. Each and every day, anyone plugged into a network is performing hour after hour of unpaid micro labor.”

The closest we get to an examination of narrative within the digital domain is not his examination of WoW, but rather an interpretation of the successful TV thriller 24. His assertion that the interrogation scenes are “merely the technique for information retrieval. The body is the database, torture a query algorithm” (p112) is amusing, but there’s more meat for me to explore when he moves away from the TV to look again at cinema. He describes films like Robert Altman’s Nashville and Short Cuts, among others, as “the visual and narrative equivalent of graph theory and social network theory” (p117).

There are a few other lines which I want to think about more. For example: “A tension remains between software, which I suggest is fundamentally a machine, and ideology, which is generally understood as narrative of some sort or other… narrative cannot exist in code as such, but must be simulated, either as a ‘narrative’ flow function governing specific semantic elements, or as an ‘image’ of elements in relation as in the case of an array or database.”

I find myself beginning to make connections between different things that I’m reading. Galloway throws away a line – “the more intuitive a device becomes, the more it risks falling out of media altogether” (p25) – which I think resonates with what Pinchbeck was aiming for in his work on presence.

I’m also interested in Galloway’s reference to “Manovich’s argument … about the waning of temporal montage, and the rise of spatial montage” (p5), so I’m going to seek out at least one more book on new media: The Language of New Media (Manovich, L., 2001).

So what exactly is RTI anyway? (Updated)

Remember that bit at the beginning of The Fifth Element, when the professor is trying to read the ancient pictograms and the sleepy boy keeps letting the mirror drop? Turns out what that professor needed was RTI.

I spent a fun day today working alongside volunteer guides at Winchester Cathedral. But we were not giving tours; we were taking pictures of graffiti. With some of these scratches in stone and wood hundreds of years old, they can be difficult to read. That’s where RTI comes in. James Miles, whose PhD work involves a lot of different ways of recording data about Winchester Cathedral, asked for volunteers to help capture data about the graffiti using RTI methodology.

RTI stands for Reflectance Transformation Imaging. It involves a camera, a flashgun, a shiny black or red sphere (snooker or billiard balls are good), and a piece of string. Let’s say you wanted to look at graffiti like this:

[Photo: the graffiti to be recorded]

This is your object. What you are going to do is light it from a variety of different angles and take a photo of it each time. You’ll want no fewer than 24 different angles/photos, and the more you have the better the results – until you get to about 80.

In the end, you’ll import all these photos into a bit of software, which will create a composite image within which you can virtually “move” the light source around to get the very best angle for whichever part of the image you are looking at. You can manipulate the image in other ways too, which will (in this case) make the graffiti more readable. But to do all that, the software needs to know exactly where you lit the object from in each photo, and that’s where the billiard ball comes in. So first of all, let’s mount the billiard ball on a tripod near the object and, of course, position the camera so that it frames both what you want to photograph and the sphere:

[Photo: the camera framing both the graffiti and the sphere]

From now on, none of these three can move. The object has of course been there for hundreds of years, but it’s your responsibility not to knock either tripod (as I knocked the one with the sphere, the first time I tried this). To make it easy on the computer you’ll be using later, the flash has to be the same distance from the centre of the object in every photo you take. This is where the string comes in: use a length of string three times the width of the object to measure the distance of the flash from the object each time you reposition it. This way you are creating a virtual dome of lights around the object.


At this point you’ll want to take a few experimental snaps to get an optimum combination of flash power, shutter speed and f-stop. The camera is, of course, in fully manual mode; you don’t want it changing things after you’ve set it up.

Happy with your set-up, you’ll start taking photos. But you don’t want to touch the camera, so it’s best to use one of those remote-control doo-dads that triggers both flash and camera. Our team included one person to hold the flash and one to hold the other end of the string; these two also have to concentrate on avoiding the camera, ball and tripods, so we had a third volunteer to trigger the camera from a safe distance (when the string was safely out of the way), and in this case a fourth to hold a white piece of paper behind the ball, which would otherwise be lost in shadow:

[Photos: the team in action]

Each time you shoot, move the flash methodically around the object. Most of us decided this meant starting at the top and moving the flash down a few degrees each time, then starting another vertical column a few degrees round to the right. We had to take care not to light the object from any position where ball, camera or tripod would cast a shadow on the object. And remember: move either ball or camera and you have to start again…
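
If you wanted to plan those positions rather than judge them by eye, the “virtual dome” is easy enough to describe in code. A rough sketch in Python; the column and row counts and the elevation limits are just the rules of thumb above, not anything the RTI software prescribes:

```python
import math

def dome_positions(radius, n_columns=12, n_rows=6,
                   min_elev_deg=15.0, max_elev_deg=80.0):
    """Flash positions on a virtual dome of the given radius (the length
    of the string), as (x, y, z) with the object at the origin and z
    pointing out from the wall. 12 columns x 6 rows = 72 shots, nicely
    between the 24 minimum and the ~80 beyond which returns diminish."""
    positions = []
    for col in range(n_columns):
        azimuth = math.radians(col * 360.0 / n_columns)
        for row in range(n_rows):
            elev = math.radians(
                min_elev_deg + row * (max_elev_deg - min_elev_deg) / (n_rows - 1)
            )
            positions.append((
                radius * math.cos(elev) * math.cos(azimuth),
                radius * math.cos(elev) * math.sin(azimuth),
                radius * math.sin(elev),
            ))
    return positions

# e.g. an object about 0.3 m across, so a string three times that:
print(len(dome_positions(radius=0.9)), "flash positions")
```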

Next it’s back to the computer. Here’s our trainer, Hembo, showing us how to process the data:

[Photo: Hembo demonstrating the processing software]

This is where the billiard ball comes in. The first stage of creating the image is to show the computer where to look for the ball. The software identifies the ball in each picture, and pinpoints where on the ball’s surface the flash is reflected. Then for each pixel in that picture (and here I might be oversimplifying what was explained) the software “deletes” any light that isn’t coming from exactly the same direction. Then, with some clever maths, it puts all the data left from all the pictures together in a file that you can manipulate in another bit of software (the viewer). Here’s that being demonstrated:

[Photo: the RTI viewer being demonstrated]
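
For the curious, the sums the software does with the ball go roughly like this: find the bright highlight on the sphere, work out the sphere’s surface normal at that point, then reflect the viewing direction about that normal to recover where the flash must have been. A simplified sketch in Python, assuming an orthographic camera looking straight down the z-axis (the real processing tools are rather more careful than this):

```python
import math

def light_direction(hx, hy, cx, cy, r):
    """Estimate the light direction from the flash highlight on the sphere.

    (hx, hy): highlight position in the image (pixels)
    (cx, cy): centre of the sphere in the image (pixels)
    r:        radius of the sphere in the image (pixels)

    Returns a unit vector (lx, ly, lz) pointing towards the light,
    assuming an orthographic view along the z-axis.
    """
    # Surface normal of the sphere at the highlight.
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))

    # On a mirror-like ball the highlight sits where the light reflects
    # towards the camera, so the light is the view direction v = (0, 0, 1)
    # reflected about the normal: l = 2(n.v)n - v.
    return (2.0 * nz * nx, 2.0 * nz * ny, 2.0 * nz * nz - 1.0)

# e.g. a highlight 40 px to the right of a 100 px-radius ball's centre:
print(light_direction(540, 300, 500, 300, 100))
```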

So, that’s what RTI is.

Some of the manipulated images from the graffiti that we recorded will be online soon. And when they are, I’ll post a link. UPDATE: here’s the link, scroll down and click on the pictures: they’ll reveal the enhanced version. I was involved in Hembo’s group.