This week I’ve been working on preparing my assignment but, by a weird and slightly unfortunate coincidence, I’ve also had a series of presentations to give, so I have been collating those as well. In fact I’ll start there.
Having shared some very vague ideas in last week’s tutorial, I spoke to Jen over Skype early this week to firm up my assignment topic, so I wanted to talk a little about how the assignment idea I posted this week started to emerge for me.
Both in my role as Social Media Officer and in my day-to-day life online I am becoming increasingly interested in how a website’s design contributes to behaviour. Social sites in particular use clever prompts and automated features to try to gather more personal details, new friends, and continued logins and participation. Sometimes that’s appreciated; sometimes it makes people seriously angry. Over the last few months there have been stories of Facebook telling users to “reconnect” with their less active – and deceased – friends, leading to a change in policy that allows profiles to be mothballed as “tribute pages” – taking your details in life and into death. On a less dark note, friends of mine – a married couple – get weekly suggestions that they should message/reconnect/poke/suggest friends for each other. It sounds silly but, having initially amused them, it has started to feel mildly but genuinely disconcerting.
On Twitter the number of accounts that prove to be spam with pornographic profile images has increased massively lately but, more peculiar, is the use of the Twitter API (Application Programming Interface) to create bizarre amalgamations of genuine posts into new Twitter accounts that follow others and post links to various sites (rarely are these spam or scam links). This is presumably a form of nefarious SEO (Search Engine Optimisation), but the disconcerting thing as a Twitter user is that it is very hard to tell these accounts apart from genuine users. In some cases the mixture of posts gives it away easily, but often it is a subtle judgement call that requires reading a page or two of Tweets and spotting some strange pattern of mentions of a site, or inconsistent personal comments. This is not a new practice – it has been happening on popular blogging platforms for some time – but there is something about the availability of the API and the shortness of the posts that makes this far more uncanny than the more obvious blogging efforts.
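That “subtle judgement call” – noticing that an account’s links keep steering readers towards one particular site – can be roughly sketched in code. This is purely a toy illustration of the kind of pattern I spot by eye, not any real detection tool: the function name, the threshold, and the sample tweets are all invented for this example.

```python
import re
from collections import Counter
from urllib.parse import urlparse

def looks_like_seo_bot(tweets, domain_ratio=0.5):
    """Toy heuristic: flag an account whose links keep pointing at one domain.

    Genuine users tend to link to many different sites; the amalgamation
    accounts described above mix plausible borrowed posts with repeated
    links to a single target site.
    """
    domains = []
    for tweet in tweets:
        for url in re.findall(r"https?://\S+", tweet):
            host = urlparse(url).netloc.lower()
            if host:
                domains.append(host)
    if not domains:
        return False  # no links at all, so nothing to judge on
    _, count = Counter(domains).most_common(1)[0]
    # Suspicious if most of the account's links point at the same place
    return count / len(domains) >= domain_ratio

# Invented sample accounts for illustration only
bot_like = [
    "Lovely day out there! http://dodgy-seo.example/a",
    "Great read, so true http://dodgy-seo.example/b",
    "Can't stop laughing http://dodgy-seo.example/c",
]
varied = [
    "http://bbc.co.uk/news interesting piece",
    "off to lunch, back later",
    "new photos up http://flickr.com/photos",
    "good post http://wordpress.example/blog",
]
```

In practice, of course, the judgement is far fuzzier than a single ratio – which is exactly why these accounts are so uncanny to encounter.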
At the same time websites are adding social features – buttons that give permanent access to Facebook or Twitter from another site, and potentially automatically share all of its content – and generally encouraging users to casually change their behaviour around a site and how enthusiastically they share content from it. At the most basic level such mechanical interventions go back to automated emails, reminders and recommendations. Richard has already commented that Amazon’s recommendation engine is a particularly lucrative intervention. Some may not like this part of the site, but the more subtle (and slightly more recent) intervention that I suspect proves even more useful to Amazon is the related link – on most item pages – to deals featuring the current product plus one other item. Often the savings here amount to under 5 pence off list prices, but they are surprisingly engaging.
Automatic mechanical interventions are not restricted to commercial sites: prompts, alerts and automated interactions are part of the MyEd site that I log into to access resources for this course. Alerts are what, in WebCT, keep many eLearners (and hybrid learners) informed about deadlines, events, changes to courses and so on. And in online academic data services – which is what my workplace, EDINA, runs – it is often a challenge to find the balance between helpful interventions that guide the user around a site and unhelpful interventions that may be invasive and/or might dissuade returning or expert users.
This is why I felt this specific area of educational and social online services would be so fascinating to look at, and why it fitted well with some of the notions in this class. Looking at digital utopias and dystopias we have considered the idealism that persists around online communities, and I think mechanical interventions in these spaces can have a dystopian or uncanny feel. However, when they work, prompts from the machine can be enhancing, can replace low-level thought and memory around mundane tasks (e.g. reminders in e-diaries and calendars) and can contribute to a productive sense of post-human interaction.
Related to this idea of the post-human and connected body, I thought this was a good time to talk about the talks I have been working on this week. The first (“Staying eLive”) was given to the University of Edinburgh’s LAMP (Library, Archive and Museum Professionals) Forum and was a modified and updated version of a talk I gave earlier this year (“eVentures of an eLife”) about the way that my life online – work, study and personal elements – all merges together, overlaps, feeds ideas that spread across all areas of my life and basically forms a huge part of me. Eagle-eyed blog watchers will not be surprised that this time around the role of online life had increased importance, with this presentation coming so soon after my week of total disconnection from the online world.
One of the questions I was asked – alongside excellent questions about the legal issues of cloud computing and the brilliantly easy-to-answer “do you ever get tired?” (“yes!”) – was how I manage the sheer volume of information I encounter on a daily/weekly basis. I shared some of my tips – bookmarking, contact databases and such – but actually, looking across all the items I have been reading this week, I can see that even methodical approaches cannot come close to allowing me to either discover or recall all the possible sites, services and interesting blogs and spaces I would ideally be monitoring and working with. At the same time a couple I know are about to go to China to visit family members, and their descriptions of what is and is not likely to be accessible on the internet there have been scarily enlightening. Both of these elements remind me of either some sagely advice or a big PR cock up – depending on which wording you go for:
“The only true wisdom is in knowing you know nothing.”
As the amount of information increases exponentially and we lend our trust to the machine to manage it for us, I wonder how we can ever fully comprehend the scale or nature of what we do or do not know. This isn’t just about findable or banned sites; it is also about language. How can I see all the web or social sites that are huge in another country, culture or language if I am only looking at the English-speaking area of the web? It’s like knowing only a small area of a huge city. When the web was younger it was easier to end up baffled and confused, but somewhere genuinely unfamiliar. I, like many others, rely on search engines, wise contacts (often on Twitter), friends, and advertising or journalism around some sites to find new spaces on the web, but I wonder how one could ever keep up more directly. The scale is now itself post-human, and I think that idea of not knowing what you don’t know may have interesting long-term political impact, as those who think they are looking at the world only see a small cross-section. Most intriguing.
The other presentation I have been working on this week is a talk (Licence to Share) for the eScience All Hands Meeting 2009 on ShareGeo and Go-Geo!, two data services run by EDINA which are both concerned with making geospatial data sets more visible and more available for sharing and reuse. Both my talk and a lot of my work, searching and bookmarking this week have been around how one deals with notions of trust at one remove. If you share data through a repository or sharing service then you need to be assured that (a) your licensed content is shared only with appropriate audiences, (b) it is going to be used responsibly, and (c) there is some incentive for you to share your data in the first place. This is a really interesting area as more and more services become crowdsourced (with data, including personal data, a commodity of the social spaces) and as the academic community – and scholarly communication norms, particularly in the sciences – moves towards more transparent sharing of data.
I’m not sure I have solid conclusions here, but I think incidents like the University of East Anglia climate change data hacking help to raise concerns and suggest that an ongoing data destruction policy may be legally safer than long-term storage of all data. This seems to go somewhat against the possibilities of Moore’s Law – which would suggest you could keep storing and processing data even as it grows exponentially – and the current notion of deposit libraries. Such possibilities begin to raise major questions about the trust and liability of user-generated content and the regulation of the web. Indeed the recent news coverage of Google’s search results for Michelle Obama suggests a demand for a regulated, curated web rather than impartial third-party methods of accessing what is already out there. This is quite a change in digital culture, and I suspect it stems from the relatively recent and fairly sudden mainstreaming of broadband connectivity (particularly in the UK) and its driver in the idea that any school-age child needs (highly regulated or monitored) access to the internet. I wonder if newer internet users have yet been properly characterised as a weakly linked digital culture or tribe. Most of what we have looked at this semester has been theoretical debate about the social and cultural possibilities of online spaces, but much of it was written – or at least conceived – in a rather different set of spaces, or era of usage, of the internet. I think it would be really interesting to look at (and maybe include as readings next time around) some form of discourse from a much more average internet user position (though it is hard to know the best place to engage with new or inexperienced internet users around these issues).
Most of the non-positive voices we have heard here have been either strongly negative or cynical about online communities, technological futures and so on. I think that the more normative voice of the “average” internet user is something less passionate and more parochial. I think there are core concepts around place, privacy, threat and – in light of the latest Google headlines (and indeed items like news coverage of suicides and social networking profiles) – expectations of the curatorial roles of web and social sites and search engines that are under-explored at present. I would love to see a comparative study of expectations of physical neighbourhoods and expectations of major trusted internet sites, as I suspect that a socially responsible, broadly moral and vaguely conservative attitude towards what should or should not be visible would emerge across expectations of both spaces. If correct, this is in fairly radical contrast to the early days of the internet and utopian visions of what it can and should be for. But, if true, it would also represent a naive stance given the role of machines, spammers, scammers and genuine bugs and glitches in programming – which would make for interesting links to challenges in the area of digital literacy.
OK, I think I have blogged too long already. I just wanted to reflect on a sense of absence of non-passionate, but influential, opinions in the discussions around digital cultures and behaviours. Passivity can often be invisible, particularly to those at the heart of a topic – for instance there are many elections where non-voters represent a bigger majority than any one political view or party – and engaging with those who do not offer up a voice directly can be tricky, even if these people voice interesting and/or critical opinions in other spaces. Since we have talked of absence and presence in online spaces recently, I thought it was worth talking about absence and presence in arguments and literature about absence from digital culture and digital vs. physical cultural clashes.