Personalisation redux

My external examiner at my viva was Daniela Petrelli, an academic in the field of HCI (Human-Computer Interaction), whom I had referenced a few times in my thesis, particularly after discovering she was behind a platform to help curators write the sort of content I had created for Chawton. I found that work too late, after completing the Chawton experiment. Among the “modest” changes that Daniela recommended in my viva was a considerable amount of further reading, including this paper, which to my shame I had not discovered in my literature search, and which would have saved me a whole lot of reading and improved my PhD! (Which is of course what a viva is for 🙂)

The paper (Ardissono, L., Kuflik, T. & Petrelli, D., 2012. Personalization in cultural heritage: the road travelled and the one ahead. User Modeling and User-Adapted Interaction, 22(1–2), 73–99) is an incredibly useful survey and summary of personalisation techniques employed in cultural heritage up to 2012. I am pretty sure it came out of somebody else’s own PhD literature search. It is very biased of course towards computer-enabled personalisation (because it comes out of that discipline), but it looks at 37 separate projects and charts a history of personalisation since the early ’90s, “when some of the adaptive hypermedia systems looked at museum content (AlFresco (Stock et al., 1993), ILEX (Oberlander et al., 1998)) and tourism (AVANTI (Fink et al., 1998)) as possible application domains” (p7). These early experiments included “a human-machine dialogue on Italian art and combined natural language processing with a hypermedia system connected to a videodisc”, and “automatically generated hypertext pages with text and images taken from material harvested from existing catalogues and transcriptions of conversations with the curator”.

The authors chart the development of web-based interfaces that don’t rely on kiosks or laserdiscs, through WAP (Wireless Application Protocol – which delivered a very basic web service to “dumb” mobile phones), to multi-platform technologies that worked on computers and Personal Digital Assistants. They note two parallel streams of research – “Hypermedia and Virtual Reality threads” – adapting the content to the user and presenting overlays on maps, etc. The appearance of PDAs saw personalisation becoming more context-aware, with plug-in GPS units, but the difficulty of tracking people indoors led to experiments in motion sensing. Petrelli herself was involved in HyperAudio, wherein “standing for a long time in front of an exhibit indicated interest while leaving before the audio presentation was over was considered as displaying the opposite attitude” (I might need to dig that 2005 paper out, and the 1999 paper on HIPS).

There is also an excellent section on the methodologies used for representing information, modelling the user, and matching users and content. When it talks about information, for example, it suggests different hypermedia methodologies, including the following (I’ve added a toy sketch of the “bag of words” idea just after the list):

  • “A simple list of objects representing the exhibition as “visit paths” (Kubadji (Bohnert et al., 2008));
  • Text descriptions and “bag of words” representations of the exhibits on display (Kubadji and PIL);
  • “Bag of concepts” representations generated by natural language processing techniques to support a concept-based item classification (CHAT (de Gemmis et al., 2008)); and
  • Collection-specific ontologies for the multi-classification of artworks, such as location and culture, and multi-faceted search (Delphi toolkit (Schmitz and Black, 2008))”
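
As a rough illustration, here is what a “bag of words” representation might look like in Python – a toy example of my own, not anything from the paper: each exhibit description is reduced to unordered word counts, which could then be compared with other exhibits or with a profile of a user’s interests.

```python
from collections import Counter
import re

def bag_of_words(description):
    """Reduce an exhibit description to unordered word counts."""
    return Counter(re.findall(r"[a-z']+", description.lower()))

# Invented exhibit text, purely for illustration.
exhibit = bag_of_words("A late eighteenth century winged brick bridge, much altered over the years")
print(exhibit.most_common(3))
```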

The paper also articulates the challenges to heritage institutions wanting to personalise their user experience, including a plethora of technologies with no standards yet reaching critical mass. Tracking users outside (before and after) their heritage experience is another challenge – membership organisations like the National Trust have a distinct advantage in this regard, but have spent most of the decade since this paper was written getting there. Of course heritage visits are made as part of a group more often than by individuals, and personalisation by definition is about individuals – yet in most of the projects in this survey, the social aspect was not considered. The paper also acknowledges that most of these projects have involved screen-based hypermedia, while augmented reality and physical interaction technologies have developed alongside.

Evaluation is a challenge too. In a section on evaluation which I only wish I had read before my project, the paper foreshadows all the difficulties I encountered. But it also says “a good personalization should go unnoticed by the user who becomes aware of it only when something goes wrong.” (p25) It is reassuring too that the paper concludes “the real issue is to support realistic scenarios – real visitors and users, as individuals and groups in daily interactions with cultural heritage. It is time to collaborate more closely with cultural heritage researchers and institutions” (p27), which is (kind of) what I did. I had better quote that in my corrections and make it look as though I was inspired by this paper all along 🙂.

StoryPlaces

Last Sunday I helped out with a trial of StoryPlaces, a research project exploring the poetics of location-based storytelling. The exploration has two big questions behind it: how do writers change what they do to write locative text? And how does experiencing text “on location” affect the reader?

My job for most of the day was to follow and observe readers as they used the stories (which are available as an HTML5 web-app when you are next in Southampton), to ask them a few qualitative questions and record their answers. But before any volunteer test subjects arrived, I got to give a story a go myself.

I chose The Titanic Criminal in Southampton, which took me on a walk from the Tudor House, where we were based, to the area known as Chapel, where my story started on the site of a working man’s house on Chapel Street. Even before the story started, I was in “storyspace” on the way to the start point. I’m not that familiar with Southampton (apart from the docks), so as I walked I was exploring new spaces. Was it novelty, or the idea that a story was about to begin, that made everything seem so magical? Or was it the eerily beautiful liturgy sung through the doors of the Greek Orthodox church I passed?

That sound stopped me in my tracks, and I loitered until the verse was finished, but it set up expectations that were ultimately disappointed. I was ready to be blown away by the poetics of space and story, and when I got to the start point, just the other side of a level crossing, even the run-down post-industrial scene that greeted me had a certain ephemeral quality as I read the story of the houses that used to stand on this spot.

Then my phone directed me to the next location, The Grapes, a pub on Southampton’s Oxford Street. StoryPlaces does not suggest a route; it just shows you the location(s) on a map from OpenStreetMap. So I followed parallel to the railway line a little way, then crossed it over a footbridge, feeling very much as though I was on a little adventure. The Grapes has a wrought-iron sign dating from the early twentieth century, which the text of the story pointed out. But at this point I came to realise that this particular story sat uncomfortably halfway between an imaginative narrative based on fact and a guided tour of Southampton. My professional interest began to impinge on my enjoyment of the story, and I couldn’t immerse myself any more in the narrative.

And then the story broke. The text offered a link to a video on the BBC website, which failed to play, but succeeded in emptying my browser’s cache, meaning I couldn’t get back to my place in the story. I went back to base to carry on with volunteering.

I was lucky enough to be assigned to observe the writer of one of the stories as she tried out the app for the first time. We talked a little about her process of writing, and about translating her imagined experience into the rules that the StoryPlaces software uses to deliver the narrative (a process which, we discovered, hadn’t quite done what she had intended). The conversation made me want to give it a go, and to write at least a first draft in situ, as I explore the places that later readers will be led to by the narrative.

I shall have to ask David and Charlie if I can be one of the writers for a future iteration of the project. In fact, I’ve just decided I will write them an email straight away.

I promise, this is the last time I bang on about @HeritageJam – until next year

The only thing I haven’t covered, since last month’s Heritage Jam event, is the on-line entries, which were more numerous. You can read about them all here (scroll down), but I want to use this last (I promise) Heritage Jam 2015 post to pick out just a few of my favourites.

First up is my award for Most Fun, which goes to Howard Williams’ Heritage Jam: Conserving the Past, an investigation of the actual jams available for sale at heritage sites on his family holiday in Wales. But it’s not all fruit-spread-based humour; he also manages to fit in his specialist subject: the heritage of death, and even the death of heritage.

Howard also contributes to the winning team entry for the on-line competition. This is a shoo-in on my own favourites list because of its medium. The Volund Stories: Weyland the Smith is a comic, created by Hannah Kate Sackett. I love comics, and it inspires me to pick up my pencil again and practise drawing. (My problem is that I use a tablet for everything nowadays, and my fingers have forgotten how to control things like pens and pencils.) Only the first few pages were submitted for Heritage Jam, and I eagerly await the completed work, which will be published (free) on both Kate and Howard’s blogs.

The individual winner was another of my favourites, Cryptoporticus by Anthony Masinton. This is a “first-person walking simulator” (in the style of one of my favourite games, Dear Esther) around a mysterious imaginary museum. To tell the truth, when I saw this (and another which I’ll mention later) appear in the Heritage Jam gallery in the last few hours of wrestling with my own entry, I almost gave up. This looked so brilliant, I thought mine and Cat’s work could not possibly compete.

I only managed to get a few minutes with the actual game during the event itself, but I liked it very much. Sadly the link to download the game on the Heritage Jam page no longer works. I hope this is only because Anthony is dealing with a couple of bugs he couldn’t manage to fix before the deadline, and the links will eventually work again, because I for one want to have a go at playing it right through.

The other entry which almost made me give up my own efforts was the excellent website Epi.Curio, by the appropriately named Katherine Cook. This encourages visitors to interact with the past, and with museum collections, in the multi-sensory sphere of cooking and eating. It’s just such a brilliant idea, presented in a beautiful responsive website. I am overwhelmed and insanely jealous of Katherine’s imagination. (And yes, before you ask, there is a recipe for an actual Heritage Jam.) I haven’t actually tried any of the recipes yet, but I’m thinking about making Pan de Muertos for the end of the month.

Spooky Pan de Muertos from the Epi.Curio website

So that’s a quick whizz through my personal favourites, though there’s plenty more quality stuff in the gallery; check out Shawn Graham’s Listening to Watling Street, for example. Indeed, there was so much high-quality work on show that when I submitted mine and Cat’s piece, I was feeling quite subdued, depressed even, despite the amazing ethereal quality of Cat’s auralisation. I felt we had worked really hard, but hadn’t come close to some of the showstoppers that were already entered.

So imagine my surprise, and absolute joy, when on my way home from the event, I saw the tweet from Heritage Jam that our piece had been Highly Commended in the judging of the on-line entries. Despite being on the winning team at the in-person event, I was even more excited by this “second place” than by that victory. The judges’ comments were so kind, so I’ll finish with them (and the electronic versions of our certificates).

The breath-taking audio reconstructions included within this complex project captured our judges’ imaginations and hearts whilst the intricate layering of narrative and interpretive contexts left them wanting more. They were hugely complimentary of the way in which the duo had structured the piece to meaningfully showcase and integrate narrative, reconstruction and data into the piece. The interactive nature of the project promoted significant discussion on the topic of agency, control and interpretation in museums and collections, making it not only a thought-provoking piece in its own right, but also in relation to wider heritage themes and issues. The technicality, scale and artful nature of the project, as well as the thoughtful, comprehensive paradata, far exceeded the expectations of our judges for a short-term “jam” project, leading them to crown “Among the Ruins” as the highly commended team entry for the 2015 Heritage Jam.

Highly Commended - Online Team

Winner - In-Person Team

Versu

A couple of weeks back, I read about “the rise of emotional agents” in the Guardian. One of the games mentioned was Blood and Laurels, a work of interactive fiction (or, if you like, a text-based adventure) set in ancient Rome. Which seems appropriate, as the Portus Project MOOC is running again. That said, I’m not convinced it’s a Rome historians will recognise: the Emperor is “Princeps”, which is a pretty generic term, and his predecessor is a fellow called Corretius. Princeps is, I think, meant to be Nero, which would make Corretius Claudius. I think I understand the reason for the changes – this way, you won’t be tempted to think that the outcome of the interactive fiction is pre-determined by actual history.

I’ve played it through a couple of times now. The first time, as I would with any adventure, I put myself into the role, and turned out to be a slightly cowardly poet who just wants everyone to be his friend and not to kill him. Turns out I’m not the only one. I’ve just finished a second playthrough, wherein I tried to be more brash, braver, and a bit of a flirt. I should stick to what I know, because this time the story ended prematurely with my character scared in bed. Not quite the satisfying ending of the previous attempt, in which I became Emperor. I’ll try again, and this time try to make enemies and see how long I survive.

It’s something more than a Choose Your Own Adventure (CYOA). For a start, it isn’t as location-based as many such stories. The interaction is based less on where you choose to go than on how you choose to interact with other characters. It’s built on the Versu engine, which models social interactions in interactive fiction. It defines not just what characters (agents) can do, but what they should do, in particular social situations. (Versu’s designer Richard Evans, who worked on The Sims, describes being inspired partly by a situation in The Sims when a Sim invited his boss to dinner but, after letting the boss in, went off to have a bath.)
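
As a crude sketch of that “can do vs should do” distinction, here is a toy model in Python (Versu itself uses its own logic language, and all the names, actions and structures here are invented for illustration):

```python
# A toy model of Versu-style social practices: a situation affords many
# actions to an agent, but also ranks what the social norms expect.
dinner_party = {
    "affords": {"eat", "chat", "leave", "take_a_bath"},
    "expects": ["greet_guest", "chat", "eat"],  # what one *should* do, in order
}

def choose_action(agent_desires, situation):
    """Prefer a desired action the situation also expects; only fall back
    to any afforded action (the bath-instead-of-dinner failure mode)."""
    for action in situation["expects"]:
        if action in agent_desires:
            return action
    return next(iter(agent_desires & situation["affords"]))

print(choose_action({"take_a_bath", "chat"}, dinner_party))  # -> "chat"
```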

There’s a lot to read on the Versu site, including this paper, which is the clearest description of how the whole thing works. I’m wondering whether this or possibly Inform 7, from another member of the Versu team, might have an application in cultural heritage sites.

Twine

I’ve been toying with Twine. Not like a cat with wool, you understand (though maybe like a cat with wool, because I find it very difficult to leave it alone now I’ve started), but with an open-source tool for telling interactive, nonlinear stories. I’m thinking about using it to create an interactive narrative based around Portus. Inspired by the Honda Type R interactive YouTube ad, I have this idea about the user being able to flick between the present day and one or more periods of the port’s Roman development and decline, while they also get a better idea of how the various spaces connect and relate to one another. I also have this crazy idea about using it to navigate other students’ creative course work. Which is all very ambitious for someone who knows very little about Twine.

So this week I’ve been learning about Twine. And the best way to learn about it is to play with it. And it’s fun. It is so much better than HypeDyn, which has a very similar model. It’s so much more intuitive, easier to use and, dammit, prettier. It may turn out not to be quite as functional as HypeDyn, but so far, everything I’ve asked of it has (with only a little Googling for help) been as easy as pie. What I haven’t yet fully scoped is how procedural it might be. On the surface, it seems everything the player reads has to be written, though it can at least be shaped by variables and “if/else” functions.
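
To make that concrete, here is a toy model of the idea in Python (not Twine’s own macro syntax, and with invented passage text and variable names): every sentence the player might read is authored in advance, but variables and if/else logic decide which ones appear.

```python
# A toy model of Twine-style conditional text: all prose is pre-written,
# and story variables shape which pieces of it the player actually sees.
state = {"visited_engine_room": False, "player_role": "pilot"}

def passage_cargo_bay(state):
    text = ["You climb down into the cargo bay."]
    if state["visited_engine_room"]:
        text.append("The hum of the repaired engine follows you down.")
    else:
        text.append("Somewhere below, the engine coughs and stalls.")
    if state["player_role"] == "pilot":
        text.append("Your hands itch to get back to the helm.")
    return " ".join(text)

print(passage_cargo_bay(state))
```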

So, given that I needed to have a structure, a story, in mind to get the most out of my practice, I haven’t started with the Portus Twine. Instead I’ve used a story that I’ve had knocking about in my brain for quite a while. It’s a piece of “fanfic”, if you will: a story featuring the characters from the little-known (but much-loved) short-lived TV series, Firefly. It’s a story that I’ve told interactively before (frequently, in fact), around a table, using a variety of roleplaying game systems. Players of all sorts have made all sorts of choices, so while I can’t claim to be able to predict everything a player might want to do, I do have a good understanding of the choices they usually want to make. I’ve also discovered that the story can have a number of different, yet satisfying, endings, and I’ve got a good idea of how the emotional ups and downs of the story feature in the narrative.

I’ve not done it all, of course, just the first scene. But I have managed to do something I’ve been wanting to try for some time, and that is to let the player’s actions decide who their character is, and thus what their point of view will be for the rest of the story. It’s only a short scene (very short if you are a gung-ho sort of player who jumps in with both feet). Short enough, in fact, to try multiple times to see who you end up as. Give it a go, and tell me what you think of my first attempt.

If you’d like to have a go yourself, this is a very easy and useful introduction, and this is a very snazzy presentation. Twine is, notably, how the award-winning game Depression Quest was created.

Proximity!

My Gimbal beacons arrived yesterday. These are three tiny Bluetooth LE devices, not much bigger than the watch battery that powers them. They do very little more than send out a little radio signal that says “I’m me!” twice a second.

There are three very different ways of using them that I can immediately think of:

I’ve just tried leaving one in each of three different rooms, then walking around the house with the simple Gimbal manager app on my iPhone. It seems their range is about three metres, and the walls of my house cause some obstruction. So, with careful placing, they could tell my phone very simply which room it is in. And it could then serve me media, like a simple audio tour.

Alternatively, as they are designed like key-fobs, they could be carried around by the user, and interpretive devices in a heritage space could identify each user as they approach, and serve tailored media to that user. Straight away I’m thinking that a user might, for example, be assigned a character visiting, say, a house party at Polesden Lacey, and the house could react to the user as though they were that character. Or perhaps the user could identify their particular interests when they start their visit. If they said, for example, “I’m particularly interested in art”, then they could walk around a house like Polesden Lacey, and when they pick up a tablet kiosk in one of the rooms, it would serve them details of the art first. Such an application wouldn’t hide the non-art content, of course; it would just make it a lower priority so that the art appears at the top of the page. Or, more cleverly, the devices around the space could communicate with each other, sharing details of the user’s movements and adapting their offer according to presumed interest. So, for example, device A might send a signal saying “User 1x413d just spent a long time standing close to me, so we might presume they are interested in my Chinese porcelain.” Device B might then think to itself (forgive my anthropomorphism) “I shall make the story of the owner’s travels to China the headline of what I serve User 1x413d.”

But the third option, and the one I want to experiment with, is this. I distributed my three Gimbals around the perimeter of a single room. Then, when I stood by different objects of interest in my room, I read off the signal strength I was getting from each beacon. It looks like I should be able to triangulate the signal strengths to map the location of my device within the room to within about a metre, which I think is good enough to identify which object of interest I’m looking at.
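
For what it’s worth, here is a minimal sketch in Python of how that position estimate might work, assuming known beacon positions and a standard log-distance path-loss model for turning signal strength into distance. The beacon coordinates, transmit power and path-loss exponent are all invented and would need calibrating in a real room.

```python
import numpy as np

# Hypothetical fixed positions (in metres) of the three beacons.
BEACONS = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])

def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance from signal strength via the log-distance path-loss
    model; tx_power is the assumed RSSI at 1 m, n the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def locate(rssis):
    """Least-squares position estimate from three RSSI readings."""
    d = np.array([rssi_to_distance(r) for r in rssis])
    x0, y0 = BEACONS[0]
    A, b = [], []
    # Linearise by subtracting the first beacon's circle equation.
    for (xi, yi), di in zip(BEACONS[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d[0]**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y) in metres

print(locate([-65, -70, -75]))
```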

What I want to do is create a “simple” proof-of-concept program that uses the proximity of the three beacons to serve me two narratives: one about the objects I might be looking at, and a second, more linear, narrative which adapts to the objects I’m near, and which I’ve already seen.

I’ve got the tech, now “all” I need to do is learn to code!

Unless anybody wants to help me…?

First words in the Language of New Media

I’ve been reading Lev Manovich’s The Language of New Media.

Or rather, I’ve read up to somewhere between pages twelve and eighteen, but it’s been a fun adventure so far. It’s somehow ironic that a book with the ambition of recording the development of digital media semantics is shackled to such an old medium as the printed and bound book. There’s a copy available from the Winchester School of Art Library, but it always seems to be out, and I haven’t had the heart to recall it. I can’t say if one person has held onto it for months, or somebody just checked it out moments before I looked on the web catalogue. And, having experienced how it feels to bring a book home from the library, get a recall notice the next day, and have to post it back, I wouldn’t want to put another student through that. I was hoping there would be an e-edition available from the library; a couple of books I’ve wanted to look at have been available that way. But, again somehow ironically, it’s dead tree or nothing.

Or so I thought, but when I checked Amazon I discovered they do have a Kindle edition. Yes, it is more expensive than the paper version bought at another online store, but it does mean I can download a preview onto my iPad.

Reading that preview, it’s apparent that Manovich is fully aware of the irony inherent in writing a book about new media. The numbered pages are preceded by a prologue, which Manovich titles Vertov’s Dataset. He explains:

The avant-garde masterpiece Man with a Movie Camera, completed by Russian director Dziga Vertov in 1929, will serve as our guide to the language of new media. This prologue consists of a number of stills from the film. Each still is accompanied by a quote from the text summarising a particular principle of new media. The number in brackets indicates the page from which the quote is taken. The prologue thus acts as a visual index to some of the book’s major ideas.

It’s Manovich’s attempt to create an analogue hypertext user interface, or front-end, for the book. It would have been good if the Kindle edition’s page numbers in brackets were links to the pages themselves, as the numbers in the Contents table are,  but if I want to use the prologue as intended, I shall have to acquire a paper version of the book.

The prologue is enticing though. A glimpse of page 158 says:

Borders between worlds do not have to be erased; different spaces do not have to be matched in perspective, scale and lighting; individual layers can retain their separate identities rather than being merged into a single space; different worlds can clash semantically rather than form a single universe.

He asks (on page 317) “can the loop be a new narrative form appropriate for the computer age?” And on page 322 argues:

Spatial montage represents an alternative to traditional cinematic temporal montage, replacing its traditional sequential mode with a spatial one. Ford’s assembly line relied on the separation of the production process into sets of simple, repetitive and sequential activities. The same principle made computer programming possible: A computer program breaks a task into a series of elemental operations to be executed one at a time. Cinema followed this logic of industrial production as well. It replaced all other modes of narration with a sequential narrative, an assembly line of shots that appear on the screen one at a time. This type of narrative turned out to be particularly incompatible with the spatial narrative that had played a prominent role in European visual culture for centuries.

This prologue (and the more conventional introduction that made up the rest of the preview) has got me hooked. I’ve ordered a copy, not from Amazon though, and not a Kindle edition. The paper version is available more cheaply, and postage free, from the Book Depository (which itself is, oh irony of ironies, owned by Amazon).

The Narrative Paradox

I’ve had a hectic couple of weeks, which has left me with some catching up to do here. But it’s been an exciting time too, with lots of connections being made and, slowly but surely, a firmer idea of how I might approach this PhD beginning to appear.

Let me start at the beginning though, with a meeting two weeks ago with colleagues from the university’s English and Computing departments, as well as from King’s College London and the University of Greenwich. We were all coming from different directions but arriving at approximately the same place. I probably shouldn’t say too much about it now; after all, we’ve got to find a lot of money first.

One thing we talked about, though, was the idea of Adaptive Hypertext. This was a new term to me, and may prove to be a useful one. If I understand my colleagues right, it’s a bit like the principle of sculptural hypertext, in that all the content is available, but elements are filtered away based on user preferences, location or previous behaviour. What differentiates it (I think) from plain old sculptural hypertext is that it’s more dynamic: the sculpting is done on the fly, as the user explores the narrative. Clearly it’s something I need to understand better.

The thing I was most excited by, though, was when Charlie Hargood put into words something I’ve been struggling with internally. The thing is, the more interactive a story is, the less good it is. Charlie called this the Narrative Paradox. I hadn’t heard the term before, so I’ve been searching for its origin. The earliest reference to the term I’ve found so far comes from Ruth Aylett’s 2000 paper, Emergent Narrative, Social Immersion and Storification. She says “The well-known ‘narrative paradox’ of VEs is how to reconcile the needs of the user who is now potentially a participant rather than a spectator with the idea of narrative coherence — that for an experience to count as a story it must have some kind of satisfying structure.” The quotes she puts around ‘narrative paradox’ don’t come with an endnote, so though she says it’s “well known” I can’t find an earlier citation. Aylett may, therefore, have coined the term. If so, she deserves some credit, for her definition is a useful one.

Another of Aylett’s papers, co-written with Sandy Louchart, is called Solving the narrative paradox in [Virtual Environments] – lessons from [Role Playing Games]. It got me very excited, not just because I’ve been playing RPGs since 1979, but also because I thought they might already have ‘solved the paradox’; but sadly they discover that “it would be much more difficult to build a computational system able to assess and act on user’s satisfaction levels.”

Engaging RPG experiences occur as a result of conversation, mediated by feedback between participants, just as the best interpretation occurs when people talk to each other. Until cheap open-source computer programs consistently pass “the Turing test”, we haven’t got a hope of building a system that replicates that process.

But I’m not that ambitious. I’m not looking for an emergent narrative created on the fly for the user, but rather an adaptive narrative, handcrafted in advance, with a satisfying structure, but which can adapt to the user’s needs and interests. Charlie’s own paper, The Narrative Braid, is closer to what I’m looking for, and his braid metaphor is useful not just for documentaries, but also for, maybe especially for, cultural heritage interpretation.

The trouble with HypeDyn

Gah! Sculpting Hypertext is harder than it looks!

I’m still struggling with what I thought would be a simple enough exercise to practise using the free hypertext creation tool for non-techy creatives: HypeDyn. You may recall I set myself the task of adapting the draft text for a guide to the River Wey and Godalming Navigations into a hypertext document. The original text, by Sue Kirkland, reads very well, but it’s written as though the reader is walking the length of the Navigations, upstream. Let me give you an example:

The towpath continues to old Parvis Bridge where the navigation widens to allow barges to turn after loading or unloading.  Built in 1760 and, although much altered over the years, it retains the typical appearance of a late eighteenth century winged brick bridge.  250 years ago the area was full of activity with wharves servicing six mills.  In the mid-nineteenth century James Yeowell, described as grocer, mealman and coal merchant, carried on his business here for many years. Now only the weather-boarded grist mill survives.

Next comes Murray’s Bridge which dates back to the very early days of the navigation and was rebuilt in 1761.  It was across this bridge that the parishioners of Byfleet’s St Mary’s Church would  walk in Victorian times to attend an annual garden party in the grounds of West Hall where local philanthropist, Frederick Stoop, lived.  The red brick country house stands downstream of the bridge on the west bank.  Dodd’s Bridge follows; its footpath leads to West Byfleet.

So the simplest iteration is a hypertext version which delivers the paragraphs in the correct order, whichever direction the walker is going along the towpath. As you can see from the above two paragraphs, if I were to edit out the very last sentence, referring to Dodd’s Bridge, the paragraphs would work reasonably well whichever order they came in.

In sculptural hypertext, where all the nodes (or cards, if you prefer that metaphor) are connected to all the others, you use node rules to hide the connections until certain conditions have been met. In HypeDyn, the easiest way of making the link visible between these two paragraphs would be to create a node rule for each one, such as (for Murray’s Bridge): IF NODE “PARVIS BRIDGE” [is] PREVIOUS NODE THEN ENABLE LINKS TO THIS NODE. That would work for people coming upstream, and for those walking in the other direction you’d have a rule on Parvis Bridge like: IF NODE “MURRAY’S BRIDGE” [is] PREVIOUS NODE THEN ENABLE LINKS TO THIS NODE.
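
Here is a minimal sketch in Python of how those node rules work (not HypeDyn itself – the node names and rule structure are just illustrative): every node is implicitly linked to every other, and the rules sculpt away the links whose conditions aren’t met.

```python
# Sculptural hypertext in miniature: every node is a candidate link,
# and a node's rule names the previous nodes from which it may be reached.
nodes = {
    "parvis_bridge": "The towpath continues to old Parvis Bridge...",
    "murrays_bridge": "Next comes Murray's Bridge...",
    "dodds_bridge": "Dodd's Bridge; its footpath leads to West Byfleet.",
}

rules = {
    "parvis_bridge": {"murrays_bridge"},
    "murrays_bridge": {"parvis_bridge", "dodds_bridge"},
    "dodds_bridge": {"murrays_bridge"},
}

def available_next(just_read):
    """After reading a node, every other node is a candidate link,
    unless its rule sculpts the link away."""
    return [n for n in nodes
            if n != just_read and just_read in rules.get(n, set())]

# Coming upstream, having just read the Parvis Bridge node, only
# Murray's Bridge (whose rule names Parvis) is offered next.
print(available_next("parvis_bridge"))  # -> ['murrays_bridge']
```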

All well and good, and if my ambition was simply to create a hypertext of a walk in either direction along the Navigations, I’d be done by now. But I wanted to be cleverer than that. Jill has written a great introduction that tells the story of the Navigations, from their creation in the seventeenth century to their acquisition by the National Trust. I’m looking at how story works in space, so I want to have a go at not telling that story all in one lump, as the guidebook would, but at telling it along the walk, in a dynamic way, so that however far you were walking, you would have the opportunity to read the whole story, but if you were walking past the right places, certain parts of the story would be triggered by particular places, as well as by what you’d already read.

I also wanted to make the text more dynamic, so that I didn’t have to edit out lines like “Dodd’s Bridge follows…” but could instead choose to show them only if people were walking in the right direction, or even show alternative text when people were walking in the opposite direction.

This second challenge is easier to solve. In sculptural hypertext, the ability to create links on each node is made pretty much redundant by the fact that all nodes are linked to all the others unless the links are sculpted away by the node rules. But HypeDyn allows the author to use the link function to create alternative text that only appears when certain conditions are met. When there is no destination set, the additional text doesn’t look like a link to the reader.

So, for example, you could include the text about Dodd’s Bridge in the Murray’s Bridge node, but make it a link which you can only see if you are walking upstream from Parvis Bridge. For those walking downstream, the sentence would be replaced by a blank space.
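
In code terms, that destination-less link behaves like conditional text appended to the node – again a Python sketch of the idea, not HypeDyn’s actual mechanism:

```python
# HypeDyn-style "alternative text": a link with no destination, whose
# text only renders when a condition on the reader's path holds.
def murrays_bridge_text(previous_node):
    text = "Next comes Murray's Bridge, rebuilt in 1761..."
    if previous_node == "parvis_bridge":  # reader is walking upstream
        text += " Dodd's Bridge follows; its footpath leads to West Byfleet."
    return text  # downstream walkers never see the extra sentence

print(murrays_bridge_text("parvis_bridge"))
print(murrays_bridge_text("dodds_bridge"))
```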

You can also set “Facts” in the node rules. There are two types of Fact. The first is a simple True/False flag. The second is a “text fact”, which can be used to set the alternative text for the links on other cards. Sadly, that’s all it can be used for. I spent an hour or more yesterday creating Text Facts that I was going to use in the rule conditions for displaying the “story” nodes among the “place” nodes. Only after I’d done all that work did I try to set a rule using a Text Fact. And that’s when I discovered you can only use the True/False Facts in rule conditions.

You’ll guess from my post title that when I started to write this, I was ready to rant at the limitations and inadequacies of HypeDyn as a tool. And for a chunk of the day today I’ve been moping over the demise of HyperCard. But HyperCard was a pretty expensive program (even at the student price I got when I bought it) and HypeDyn is free. And actually (as writing this post has made me realise) the trouble with HypeDyn is my own technique – I should have tested my idea out on a couple of nodes, rather than waste time setting up the Facts for all the nodes. Something about bad workers and tools comes to mind.

Still, it’s been a learning experience (even if someone else would have learned the same lesson in fewer hours) and that’s what I’m here for, so I can’t complain.

A bridge over the River Wey Navigation

Ripping text into Hypertext

I’ve spent the day engaged in a first-pass edit of a proposed guidebook text into HypeDyn. The text is the 10,000-word draft by Sue Kirkland of a guide to the River Wey and Godalming Navigations. Though this is a National Trust site, it’s not an official project; I’m doing it as a “real-world” exercise in using HypeDyn.

So far I’ve cut the text up into about seventy “nodes”, most of which are associated with actual places along the river. There are also eight that are pure “story” elements, and a few others are about things or people. A few “transitions” have also become apparent. The text as it stood envisaged a twenty-mile walk from the Thames to Godalming – or so I thought, for most of the day. This puzzled me, as the Navigations are a favourite place for my family to walk, but we’ve never considered walking it all in one go. (Well, my wife probably has, but the rest of us are far more fair-weather.) And even if we were, I thought, why would we start at the Thames? Surely it would be more pleasant to walk downstream?

The “one way” nature of the proposed text was the reason why I’d thought it might be fun to turn it into hypertext in the first place. If I managed no more than making it readable in two directions, that would be a useful enough thing to do in any case. So while I was editing I was thinking about the walks my family had taken, some upstream, some down, and I still couldn’t work out why the original author had chosen to start at the Thames. It only dawned on me as I neared the end – the Navigations aren’t only for walkers, obviously. Lots of pleasure-boat owners and hirers use the waterway too. Many are local, with their boats moored somewhere along the river, but most visiting craft would have come via the Thames. Doh!

So, when I start my next task, turning it into a context-based hypertext, I won’t just have to think about walks starting at (for the sake of my sanity) the four sites with the best car parking, but also boats coming from the Thames (that should be easy, of course, because that’s how the original was written) and the two points where other waterways join the Navigations. Actually, it’s one other point right now – the Wey and Arun Canal is not yet fully restored.

So at either end there is only one direction of travel, but at the other three (or four) points, the visitor will have a choice to go up or downstream, and the language of the text will have to change to cope with the choices the visitor makes. I also want the text to tell most of the “story” elements to the visitor, even if they take the shortest, four-mile, walk.

That’s all for another day though.

I took a phone call today from a friend of a friend who is possibly being offered a high-powered job with a global cultural heritage brand. We talked about that company and its competitors, and where the future might go. And for the first time I used the words “Ambient Interpretation.” I know exactly where I got the word Ambient from, but I’m not telling you, not yet. And not tomorrow, but next week.