Are we all cyborgs? Digital media and social networking

Continuing with my reading of Staiff’s Re-imagining Heritage Interpretation, I come to his chapter on digital media and social networking. He wastes little time on those who “persist with the idea that podcast audio-tours and GIS-activated commentaries are just extensions of ‘old’ ways of interpreting material culture, but simply using digital techniques”; rather (writing in 2014, remember), Staiff is a champion of “Web 2.0 and all it emblematically stands for”:

Web 2.0 and the generation of users who inhabit this experience […] are not interested in pre-packaged information that is passively received; rather they want open access to databases so that they as visitors can share the content and be co-authors of the interpretation. The digital-savvy wants to be a creator of meaning as well as a consumer of meaning.

I shared a similar optimism when he was writing, but I am less convinced now. Yes, visitors to cultural heritage do share their experiences on social media, but they are not yet demanding access to databases to share that content and their own interpretation. Or at least not many are, despite the prevalence of smartphones, and our seeming inability to let go of them. The majority of visitors that I (and others) observe do not use their devices on site. It’s worth mentioning that back in 2014, he also saw 3G wireless as more of a game-changing technology than it turned out to be. Even 4G speeds haven’t enabled mass use of the internet on-site in heritage. Recently a colleague spoke hopefully that fifth-generation wireless technology might finally get people using mobile devices more on site. We shall see, but I remain unconvinced.

But digital interpretation does not need to take place on-site. Staiff writes enthusiastically of a student’s response to a digital heritage interpretation assignment he regularly hands out. He describes how “Gabriel” chose his/her ancestral home town, Siena, and started off by creating an inventory of all the information s/he could find on the web about it, including Wikipedia and YouTube, official civic sites and personal blogs by both tourists and residents. Then, says Staiff, “Gabriel” built an interactive website that allowed visitors to mash up the content he had sourced, and add to it. “Gabriel” built the code, but didn’t control the content: “what emerged was a ceaseless interaction between fellow classmates, his/her family, and friends. It is impossible to describe in words the way this digital creation worked out or what it included because what stood out changed, at any point in time, as did the conversations and contributions.”

Staiff lists some of the things that caught his eye, representative of the dynamic and user generated nature of the site, and that list includes, for example:

  • a grandmother’s reflection about growing up in the contrade
  • a recipe for panforte
  • a poem about a beloved aunt who lived in Siena
  • a friend’s university essay on Duccio’s Maestà; and,
  • a link to the video game Assassin’s Creed

… among many other things. Apparently the site “is a special place/space in Gabriel’s family with contributions from both the Sienese side of the family and the Sydney side of the family.”

Which all sounds wonderful, in the new media mode of Manovich: something more than the sum of its parts, created by its users. Here, heritage is not simply an object or place that you look at, but (Staiff cites Laurajane Smith’s Uses of Heritage) something you do, a verb rather than a noun. Gabriel’s website is a utopian interpretation of the city.

Utopian in its truest sense, because it doesn’t exist.

Gabriel is, in fact, a “hypothetical student”, and the website Staiff describes “is the work of a number of students over several years […] merged together to form a composite example”, which is a pity because it sounds fun. Now, any one of Staiff’s students may have produced a site as dynamic, as comprehensive and as well supported by its users, but somehow I think not. I have written before about the critical mass of users that heritage-specific social media sites need to be dynamic. I have also written about the luxury of time available to digital creators/curators that very many people simply don’t have. The students who constituted “Gabriel” were given an assignment, and given time to create their work. The majority of social media users are necessarily more passive. These are concerns that I think Staiff shares:

“In the digital world, who is participating, who gets to speak, are all speaking positions valid in relation to cultural places, objects and practices, who is listening/viewing, who is responding and why, what are the power relations involved here, do marginal voices continue to be sidelined, what about offensive and politically unpalatable commentary?”

But it cannot be denied that there is truth at the heart of Staiff’s argument. Much more is being researched, written, drawn, filmed and in other ways created about heritage than can possibly be curated by the traditional gatekeepers – museums, trusts, agencies and their staff. Staiff acknowledges “the anxiety about who controls the authoritative knowledge associated with heritage places” but counters that “What is needed is a complete rethink and conceptualization of the role of heritage places in the digital age and to see the technological devices used by visitors, not as ‘things’ separate from the carrier, but as ‘organic’ and constitutive parts of the embodied spatial, social and aesthetic experience.”


Designing another survey

I’ve been mulling it over for weeks, but I’ve decided that I need to get some more data. So I’m preparing another survey, to be promulgated via the internet. It will ask cultural heritage visitors about their use of mobile devices around heritage sites. I got a pretty good sample size last time, so I hope I’ll get a similar response this time.

Though I feel that my social networks might be more likely to fill this one in, I’m curious to see how it compares to the one that was overtly about gaming. I don’t want to wonder whether there are more gamers than museum visitors in the world… 🙂

Actually though, I am going to include a couple of questions about mobile gaming. I want to see if certain attitudes have changed in the three years(!) since that survey. I expect to see more people (even museum visitors) aware of location-based gaming after the Pokémon Go phenomenon. So I’ll have two questions based upon (but updated from) a couple from that survey.

The main purpose of the survey, though, is to identify barriers to mobile device use around heritage sites. There’s a lot of conjecture in the literature, it seems, but very little data. I think that’s partly because most of the audience research is based on questions asking “what would encourage you to use mobile devices?” rather than “why wouldn’t you use them?”

Open Heritage Scholarship 2

Last week I was at London’s Digital Catapult centre, building on the discussion we started with the thinkathon in Winchester. This time round, we wanted to bring in some other voices from outside the academic sector, so I invited Lindsey Green from Frankly Green and Webb, and Kevin Bacon from Royal Pavilion and Museums, Brighton, who I met when he organised a fun workshop for the heritage sector. We also had Jake Berger from the BBC, David Tarrant from the Open Data Institute and Nigel Smith from FutureLearn. Graeme, Adam and Elenora were also there of course, as were Bryan and John from We Are Open.

Graeme started the day, while we awaited all the delegates, by explaining a little bit about the Portus archaeology project, and how virtual access to a (until recently at least) mostly closed site had been enabled through things like the MOOC, a relatively new online tour, a BBC/Discovery Channel TV documentary and open publishing of some academic papers. The opportunity, he said, lay in linking these and more resources, so that an interest sparked by one could be satisfied by others.

Then everyone had the opportunity to introduce themselves and explain a little bit about what they hoped to get out of the day. One of the most exciting things I learned here was about RES, Jake Berger’s project, which the BBC has been surprisingly quiet about. This little video explains it better than I can.

We attempted to run the session a bit like the earlier thinkathon, but it’s interesting to note that with more people, it didn’t work quite as well. In Winchester, with a smaller group, the We Are Open guys nudged our discussion to explore interesting avenues more deeply. But with this larger group, Bryan ended up drawing and drawing, trying (and sometimes failing) to keep up, and not contributing as much as he was able to do in Winchester. Graeme compensated by taking more of a “chair” role than he had needed to during the Thinkathon, but I think in the end the discussion was shallower. Still, new concepts reached more minds in the larger group, so I hope we may have scattered some seeds that will bear fruit in future.

We started by talking about MOOCs and the Portus FutureLearn course. Though it is an open course, some hoops have to be jumped through to make the content open, and in fact not all the content is open: students’ own comments are considered their copyright by default, for example, so they can only be seen by other students. One of the advantages of massively open courses is the broad range of students they attract, with different backgrounds and levels of expertise. They may well bring to the course, through a comment, a unique insight which no-one had considered before, of real value not just to fellow students but to the academics behind the course. But that insight can’t be shared from within the course. Permission must first be sought from the student.

Some contributions are made using other platforms. For example, in the Portus MOOC students were asked to submit diagrams and photos on Flickr. On upload, Flickr allows the user to set the level of open access to the file, but the user can’t change that after the original decision, and the default is copyright, all rights reserved. So despite the various levels of Creative Commons protection offered by the other options, most of the material uploaded in this manner is also closed, not open. We talked a little about incentives for users to consider Creative Commons when they share their work.

I don’t think the open badges idea that we talked about quite a lot at Winchester was specifically mentioned here, but on reflection I think it’s bubbling under. For example, we returned to the idea of Experience Playlists.

[Image: Experience Playlists sketch]

The idea of leaving a trail of breadcrumbs across digital and possibly even real-world platforms is attractive, not just for the trailblazer to look back on, but for other users to follow. But should it be more explicit than, say, Amazon telling us “people who bought this also bought these” or Google ranking popular links? Could an open badge system residing in the background on people’s phones discreetly create a visit timeline, like the one I left at SF MOMA?
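As a purely speculative sketch of what I mean (nothing here exists; every name is invented for illustration), such a background system might be little more than an append-only log of check-ins on the device, with the breadcrumb trail derived from it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: the kind of minimal structure an open-badge-style
# app might keep on the device to build a discreet visit timeline.

@dataclass
class VisitEvent:
    site: str           # e.g. "SF MOMA"
    exhibit: str        # the gallery or object the visitor stopped at
    timestamp: datetime # when the check-in happened

@dataclass
class VisitTimeline:
    owner: str
    events: list = field(default_factory=list)

    def check_in(self, site: str, exhibit: str) -> None:
        """Append a new event; the log is never edited, only extended."""
        self.events.append(VisitEvent(site, exhibit, datetime.now(timezone.utc)))

    def breadcrumbs(self) -> list:
        """The trail of (site, exhibit) stops another user could follow."""
        return [(e.site, e.exhibit) for e in self.events]

timeline = VisitTimeline(owner="visitor-123")
timeline.check_in("SF MOMA", "Photography gallery")
timeline.check_in("SF MOMA", "Sculpture terrace")
print(timeline.breadcrumbs())
```

Whether such a trail should then be surfaced to other visitors, and how explicitly, is exactly the open question above.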

Then we tackled heritage organisations’ differing understandings (fears?) of what Open means. Different laws pertain in different states for a start, so an organisation’s ability to make stuff open could be limited by the state in which it operates. Then there is the issue of willingness, and not just of heritage organisations – for example, a museum might own the physical artifact of a contemporary painting, but not its intellectual property, of which the artist (or their estate) might retain control. And even when the museum is the outright owner of a work, it may fear that opening up access to its reproduction limits its ability to generate much-needed funds. Though, as Lindsey pointed out, in the Netherlands the Rijksmuseum may have shown other institutions the way in that regard. Personally, I came out of the discussion no less convinced that a Creative Commons, share-alike, non-commercial proposition is something that heritage organisations should proactively embrace.

We had a go at working out what we might learn from citizen science projects, but by this time I think we were all getting tired, and I’m not sure we came out with any useful conclusions. My own notes get scrappy here, but I do remember pointing out the critical-mass challenge for public participation in heritage, which has dogged crowdsourced heritage projects like History Pin.

And that might indicate a good place to finish this blog post. We discussed what we were trying to achieve with all this, and no-one was expecting miracles. We know there will always be a steep curve on the axes of number of participants and depth of involvement: while hundreds of thousands or millions might passively watch a TV documentary about Rome, fewer and fewer will participate at deeper levels of interest and active participation. All we want (expect) to do is tweak that curve just a little bit. Not even as much as this sketch suggests (though that would be nice):

Sketches from Bryan Mathers, weareopen.coop


#openheritagescholarship Thinkathon

[Image: Thinkathon sketch]

Last week, I went to Winchester School of Art to meet with some university colleagues and a couple of facilitators from We Are Open, for a Thinkathon. “What,” I hear you ask, “is a Thinkathon?”

I guess in less enlightened times, we might have called it a brainstorm, but it was a tight, friendly discussion/workshop to help us think through some challenges we’d set ourselves about open heritage scholarship, to wit (quoting from Graeme’s brief):

  • The nature and extent of user transitions from one open scholarship mechanism to one or more others e.g. one of the 40 million users who have already seen one of our documentaries following through to ePrints or our Massive Open Online Course, visiting Italy to see the archaeological site via a bespoke tour or paying to visit an exhibition.
  • The impact of our improved system on user engagements with each mechanism e.g. reading and commenting on Arkivum or ePrints datasets; public sharing of related content via social media. This will identify the opportunities for monetising activities in open scholarship
  • The impact of the design of the open scholarship ecosystem on these user journeys, building on previous work including video annotation, navigation via 3d content, interactive mapping, and timelines and multimedia navigation.

One thing that set it apart from your more traditional brainstorming session was the presence of Bryan from We Are Open, who constantly drew as we (and he) talked, projecting his doodlings up onto a screen so we could watch our ideas take shape as we came up with them. Some of his sketches illustrate this piece.

So what did we conclude? Well, the second half of the day went down a credentials rabbit hole, which was fun (and interesting) but probably not yet where we are in the project. The Portus MOOC, which in the new year will have its fifth intake, has been a great experiment in open education, and more heritage organisations are taking their first steps into those waters. But the challenge (I think) is to test the willingness of heritage organisations to think “open” (at least in the digital world) rather than strictly controlled and moderated. I’d like to get these guys from We Are Open into a room with my professional colleagues, and with others from Historic Royal Palaces, English Heritage etc. I learned that week that John from We Are Open actually started his working life with the National Trust, before moving on to organisations like Mozilla, so it would be fun to join the circle and get him involved again.

Can the PORTUS project afford it though?

P.O.R.T.U.S is go!

A week or two back, I had an interesting conversation with my supervisor, which I didn’t think I should mention on-line until, today, he invoked the “inverse fight club rule”. So I can now reveal that P.O.R.T.U.S stands for Portus Open Research Technologies User Study – yes, I know, as Graeme said, “recursive-acronym-me-up baby.” This isn’t the Portus Project, but it does ride on the back of that work, and (we hope) it will also work to the Portus Project’s benefit.

P.O.R.T.U.S is a small pilot project to explore better signposting to open research, so (for example) people interested in the BBC documentary Rome’s Lost Empire (which coincidentally is repeated TONIGHT folks, hence my urgency in getting this post out) might find their way to the Portus Project website, the FutureLearn MOOC, the plethora of academic papers available free through ePrints (this one for example) or even raw data.

Though the pilot project will use the Portus Project itself as a test bed, we’re keen to apply the learning to cultural heritage of all types. To which end, I’m looking to organise a workshop bringing together cultural heritage organisations, the commercial companies that build interpretation and learning for them, and open data providers like universities.

The research questions include:

  • What are the creative digital business opportunities (particularly but not exclusively in a cultural heritage context) provided by aligning diverse open scholarship information?
  • What are the challenges?
  • Does the pilot implementation of this for the Portus Project offer anything to creative digital businesses?

The budget for this pilot project is small, which means the workshop will have limited places, but if you are working in digital engagement, at or for cultural heritage sites and museums, and would like to attend, drop me a note in the comments.

The Big Why #IdeatoAudience

Yesterday, I went to Digital: From Idea to Audience, a small conference (more of a large workshop actually) put together by Royal Pavilion and Museums, Brighton and Hove, with funding from Arts Council England. I might have enjoyed a trip to Brighton, but this actually took place in central London, just across the road from the BBC.

The programme was put together by Kevin (not that Kevin) Bacon, Brighton’s Digital Development head honcho. (By the way – I’m going to quote from this post in my forthcoming presentation at Attingham.) Kevin stated at the outset that the day didn’t have a theme as such; it was rather a “nuts and bolts” conference, a response to many of the questions he had been asked after making presentations elsewhere. He hadn’t briefed the speakers, only chosen them because he felt they might have experiences and learning of use to people working on digital projects.

But if a united theme came out of the day, then it was Keep Asking Why?

Kevin kicked off the day talking about his work at Royal Pavilion and Museums, Brighton and Hove, a number of sites across the city (including the Pavilion itself, Preston Manor, the Booth Museum and both Brighton and Hove museums) that attract around 400,000 visitors a year. They hold three Designated collections (of national importance). He wanted to talk about two digital projects, one of which was (broadly) unsuccessful, and the other (broadly) successful.

The first was Story Drop, a smartphone app that took stories from the collection out into the wider city. GPS-enabled, it allowed people to take a tour around the city based on an object from the collection: get to a location and it tells you more about it, and unlocks another object. As an R&D project, it worked. Piloting it, they had very favourable responses. So they decided to go for a public launch in January 2014, the idea being that lots of local people would have got a new phone for Christmas and be keen to try out a new app.

The launch turned out to be a damp squib. The weather was partly to blame: January 2014 was one of the wettest on record. But even when the streets dried out, take-up was not massive. Kevin said to me during the break that maybe only hundreds of people have downloaded the app to date, two years later. He showed a slide detailing some of the reasons why people weren’t using it.

[Image: slide listing barriers to app use]

These reasons chimed with my own research. It wasn’t an unmitigated failure – people do love it, but only a very small number of people. So, he said, think about why people will use your digital project.

Which is the approach he took for the redevelopment of the museum’s website, shifting from designing for demographics to designing for behaviours (motivations, needs, audiences). And that was far more successful: a 23% increase in page views and a 230% increase in social shares.

Then Gavin Mallory from CogApps took the floor to talk about briefs. He has already put his presentation on Slideshare. As experienced providers to the cultural heritage industry, they’ve seen a lot of briefs: some good, some woolly, or overly flowery, too loose, too tight, too recycled, or, as Giles Andreae would have it, “no [briefs] at all!” I must admit, I’ve been guilty of a few of those.

After lunch, Graham Davies, Digital Programmes Manager at National Museum of Wales, asked (emphatically) Why? Or rather, why digital? I think the title of his session should have been “From Digital Beaver to Digital Diva”, which is something he said, though he didn’t call it that. It was a really useful set of challenges to make when somebody says “we need an app” or “an iPad to do this.”


I’m running out of time, so I’ll finish with just one quote from the final speaker, Tijana Tasich, who has worked at Tate and is currently consulting to the South Bank Centre. Talking about usability testing, she said, “we used to test just screens and devices, but with iBeacons etc. we are increasingly testing spaces.”

Information Commissioner’s Office on mobile location analytics

Heritage sites experimenting with MLA take note. The ICO yesterday released a blog post addressing the potential danger to privacy of Mobile Location Analytics and, incidentally, Intelligent Video Analytics. Simon Rice, Group Manager for Technology, who also sits on the International Working Group on Data Protection in Telecommunications, says “Here at the ICO, we’re interested in Wi-Fi location tracking because it could involve the use of personal data. This means it falls under the Data Protection Act and that’s where we come in. […] The use of this type of technology is not just confined to the retail environment – airports, railway stations and even city-wide Wi-Fi networks could use it to monitor individuals. […] Therefore the working group has written a list of recommendations for use of the technology.”

The working paper itself is worth a read, and definitely more balanced than some newspaper coverage (as usual). It makes many references to checking what you are planning against the local legislation wherever you are working, but also recommends seven safeguards that should be built into your work (and which, I imagine, will be built into legislation over time):

  1. Notification to individuals – Organisations must ensure that there is sufficient information, including a range of physical and digital signage, to clearly inform individuals that location technology is in operation. The information must clearly state the purpose for collection and identify the organisation responsible. It is recommended that the industry develop a standard symbol which can be distributed throughout an area to remind individuals that the technology is in operation, similar to the effect from CCTV signage. Specific consideration must be given to staff, employees or other individuals who, if not excluded from the tracking, may be subject to extensive data collection;
  2. Limiting the bounds of data collection – Collection should only take place once the individual has been suitably informed and organisations must not seek to collect and monitor outside their premises. This can be achieved through careful placement of receivers, limiting data collection through a sampling method and to specified time periods or times of day (e.g., during store opening hours). The frequency of collection should also be limited to that which supports the specified purpose. The use of airgaps to create a non-contiguous data collection area and ensuring that collection only takes place in areas which are relevant to the specified purpose should also reduce the risk of privacy intrusion. Organisations should also seek to identify “privacy zones” where no tracking can take place as a result of technical or physical measures. This can be important in areas which have particular sensitivity such as toilets or rooms set aside for first-aid or worship. In jurisdictions where tracking outside of the organisation’s premises can be carried out in compliance with the law, sufficient safeguards should be in place to protect individuals’ privacy;
  3. Anonymise data without delay – Organisations should seek to delete or anonymise data as soon as the data is no longer required in its original form;
  4. Appropriate retention of individual level data – In cases where there is a clear legal basis for the processing of personal data, organisations should apply methods to convert unique identifiers, such as MAC addresses, into a form which reduces the potential for privacy intrusion. For example, if the identification of repeat visits is not envisaged then pseudonymising the identifier would prevent this possibility yet still provide sufficient analytics of daily footfall and routes taken. At the end of the legally permissible retention period, the relevant data should be anonymised or securely destroyed. An analysis comparing events over multiple reporting periods (e.g., percentage change in visitors in a given period of time) can be performed by comparing individual period aggregates;
  5. Consent for the combination with other information – Individuals should be fully informed when location data is intended to be combined with other information such as transaction history. This is especially relevant when location tracking is added as a feature to an existing loyalty scheme, for example, adding BLE beacon functionality to an existing retailer’s smart phone app. The user’s acceptance of an update via the app store is unlikely to be sufficient to qualify as being fully informed. Legislation in some jurisdictions may also require explicit consent for certain types of personal data;
  6. Consent for the sharing of individually identifiable data with third parties – Organisations should not share data which could be used to identify an individual with third parties without the valid informed consent of the individual concerned (this would include sharing data with other clients of a single third-party location analytics provider) unless there is a lawful exception; and
  7. Implement a simple and effective means to control collection – Organisations should also establish a system which allows individuals to control the collection of such data even in cases where this is not explicitly required by applicable privacy legislation. Organizations should prominently display the existence of choice and control options in the area of data collection. This should include an easily accessible, clearly communicated and effective means to exert control. It is recommended that a single mechanism be supported by all operators of location analytics services such that an individual is only required to express their preference once. If the tracking is based on informed consent then individuals must be enabled to revoke their consent in an easy and persistent manner. Where technically possible, clear audit trails allowing end users to know when and for what purpose data has been collected about their devices and by whom would also be recommended. Users should also be enabled to delete all or part of the previously collected data.

Heritage Jam 2015 – sign up soon!

Heritage Jam at York University – registration opens on 20th August

I had a great Skype chat today with Neil and Paul from Info-Point. I’d first met them a couple of years back, and wrote about their product here. In fact, I’d put them in touch at the time with one of my client properties, Saddlescombe Farm, which had a problem I thought Info-Point might be the perfect solution for. It was – and Info-Point have now supplied solutions to a number of out-of-the-way (and out-of-signal) National Trust sites across the country.

Their challenge is that they are technologists, not storytellers, but sometimes places come to them hoping they can supply the content, not just the platform. To this end, they are working hard at building a network of interpretation designers and content providers, who they hope will use their technology when heritage sites come calling.

We were chatting idly about setting up a two-day “hacking” event, to bring together heritage custodians, storytellers and technologists. While we were talking I thought “we could call it something like Heritage Jam!”

Afterwards I thought, “Heritage Jam… that’s too good an idea to be mine. Where have I heard it before?” and a quick Google later, I knew where. York University will be hosting Heritage Jam towards the end of September. I missed it last year, and made a mental note not to miss it this year. OK, so that mental note came back a bit garbled, but it came back in time for me to get myself on the mailing list. Registration opens, and closes, on the 20th of August. So if you want to go, set a reminder in your diary! If you can’t get to York, there’s an online participation month kicking off on the 20th of August too, so check that out.

Get ready for Karen #KarenIsMyLifeCoach

Yesterday I finished playtesting Blast Theory’s soon-to-be-released app, Karen. I don’t want to say too much about it, because I don’t want to spoil any surprises for you, and it’ll shortly (hopefully next week, pending approval, and assuming it ran as well for other playtesters as it did on my device) be free to download for iOS from the App Store. So you’ll be able to try it for yourself. Android users will also get their turn, but not quite as soon. It’s the culmination of the work on profiling that Blast Theory have been exploring over the last couple of years.

It’s a great piece of interactive art. I’ll go so far as to say it’s the best interactive story I’ve played, if only because it manages to create a sublime sense of real interaction. I’m not making decisions for an avatar, like John Marston in Red Dead Redemption, but for myself. I’m telling Karen about me, not about what, for example, Marcus in Blood and Laurels might do. I can tell the truth or I can lie (in fact I shuffled uneasily between the two) but that choice is mine.

Do I change Karen’s story through my decisions? To be honest, I don’t know, I’ve only had time to play through once. But the illusion of true interaction was surprisingly effective.

I especially liked the use of Likert sliders to answer some questions, which allowed me to be more “true to myself” than the multiple-choice answers available for the other questions. Karen’s is a nuanced story, and sometimes I wanted, but was unable, to give a nuanced reply.

It’s great fun, get it. It’s free, after all. Maybe don’t play it in front of the kids: Karen can turn the conversation on a dime to subjects you might not want them to listen to. And decide now what you are going to tell your significant other about playing it, because Karen will be asking about them too…

Oh and my name is in the credits, look:

[Image: the Karen credits screen]

I made this! (Or rather, I bunged them a tenner on Kickstarter a while back). Oh, and hey, they are in the news already!

Synote, video and distance learning

I’ve been a bit quiet on this blog of late, partly because of devoting my time to two very interesting but concurrent MOOCs, both of them from the University of Southampton and FutureLearn, and both starting in the same week. One, Shipwrecks and Submerged Worlds: Maritime Archaeology, was only four weeks long, though, so having completed it, and this week’s work on Web Science: How the Web is Changing the World, I have a little more time to catch up with the blog.

Of course one of the ways in which the web is changing the world is the provision of this sort of education. And for the duration of these courses I keep getting distracted by the learning experience itself. Last time, it was participation on the forums that sparked my interest. This time it's video. The videos on FutureLearn seem short: three, four, or at most seven minutes long. Contrast this with the ones on the Coursera course I did on statistics, which were 20 to 30 minutes long. Looking at the guidance FutureLearn offers for partners creating course content, the recommendation is no more than ten minutes.

I'd prefer something longer. To be honest, what I really wanted was an audio-only podcast to listen to as I drive for work. My gold standard is In Our Time, the discussion programme hosted by Melvyn Bragg on BBC Radio Four. But that's by the by; the video content on FutureLearn seems the briefest of introductions to concepts, the shallowest of discussions, not a developing and involving narrative (though I don't recall thinking that with the Portus MOOC, which is interesting).

I guess one of the reasons why they keep the videos short is that they want to get people quickly discussing the subject on the forum. It would be difficult to retain an interesting thought you had during a video if you had to wait 20 minutes for it to end. Then there are the short quizzes, which give participants an opportunity to reflect on what they've learned. Coursera had a system where they could include these in the video itself; indeed, if I recall correctly, you couldn't continue with the video until you'd had a go at the quiz. FutureLearn treats the quizzes as separate elements, normally towards the end of the week, and only occasionally during the week's content, but always on a separate page. The Coursera system, in a crude way, lets you interact with the video. FutureLearn treats the video as a discrete element.

Don't get me wrong, I'm not saying I'd prefer longer videos to the text articles that FutureLearn offers. I'm just as happy to learn by reading as by watching. It's just that I feel the short video format doesn't use the medium to its full potential. Video has a great ability to compress or expand time, overlay the real with the imaginary, and explore distance, but those abilities need room to breathe.

Last week I was invited to have a look at a technology that might reconcile my desire for longer videos with the didactic need to discuss what we're watching. Synote is an application developed by Mike Wald at the University of Southampton to make "multimedia resources such as video and audio easier to access, search, manage, and exploit. Learners, teachers and other users can create notes, bookmarks, tags, links, images and text captions synchronised to any part of a recording, such as a lecture."

Mike and PhD student Yunjia Li showed us a new version of the application, currently in development, with a view to making it usable for MOOC learners as well as others. They showed us how easy it is to play a video through Synote and, while it's playing, make comments that are timecoded to particular parts of the video; comments can even be attached to particular areas of the screen. Comments can link to other web-based resources, anything with a URI in fact. And as every comment has a URI of its own, you can link from one section of a video to another section of a related video, effectively making your own "mash-up" (although with buffering it won't be quite as slick as something edited together).
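Incidentally, the idea of a URI that addresses a moment within a video isn't unique to Synote: the W3C Media Fragments specification lets any media URL carry a "#t=start,end" fragment addressing a time range in seconds. As a rough sketch of how a timecoded comment link might be built (the URL below is made up, and Synote's own URI scheme may well differ from this):

```python
# Sketch of a time-addressed video link in the style of the W3C Media
# Fragments URI spec, where "#t=340" points at 5 min 40 s into a video
# and "#t=340,360" addresses the range from 5:40 to 6:00.
# This is an illustration of the general idea, not Synote's actual API.

def media_fragment(base_url, start, end=None):
    """Return a URL addressing a time offset or range (in seconds)."""
    fragment = f"#t={start:g}" if end is None else f"#t={start:g},{end:g}"
    return base_url + fragment

# e.g. a comment pointing at Dennett's "real magic" remark:
link = media_fragment("https://example.org/dennett-talk.mp4", 340)
```

A comment carrying a link like this can send a fellow student straight to the relevant moment, rather than to the start of a fifteen-minute video.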

Adam, a colleague from the University’s Winchester School of Art was also (virtually) at the meeting, and soon set up a group of his students to help design a better user interface. You can read about their exciting and efficient workshop here.

So as I've worked through this week's content for the Web Science MOOC, I've been thinking about Synote and how it might be used. To be honest, the main course content videos seem too short to reward the effort of running them through a different web viewer just to be able to tag your comments to a particular place in the video. And reading the comments, just one commenter (at the time of writing) seems to have felt any need to refer to a particular point in a video. It seems the brevity of the videos might actually contribute to the generality of the comments.

However, the MOOC has sent us off to the TED website to look at a couple of longer videos there. Often the "See also" links at the end of an article point to videos too. These videos are often longer (the TED ones run just under fifteen minutes), and on these I think it would be good, from a learning point of view, to be able to tag comments to particular sections of the video. For example, a couple of commenters included links to videos that weren't part of the "see also" course-related material. They might have preferred to be able to point their fellow students to the particularly relevant section of each video. One such video was a TED talk by Daniel Dennett, always a favourite of mine. He quoted a lovely line about five minutes 40 seconds in, about how "'real magic' doesn't exist. Conjuring, the magic that does exist, is not 'real magic'". Now I'd like to point you, dear reader, to that moment, but it has taken me two lines of text to link you to the video and tell you where to find the bit that I thought was particularly funny. It would have been so much easier if I'd been using Synote.

So, imagine a MOOC assignment that said "watch these through Synote and share/mash up the bits that are most relevant to what we've been discussing". Imagine participants setting up a Synote playlist of all the bits of TED talks most relevant to the subject they are discussing. Imagine, in the Daniel Dennett talk above where he asks the audience to spot changes in a series of short videos, participants actually being able to mark exactly where on screen and in which frame they first noticed the change.

All of these are things that Synote is capable of.