Category Archives: Uncategorized

Missing the point in gesture-based interaction

This is a draft of a column I wrote for the ACM’s interactions magazine. It will appear mid 2011.

_____________________________________________________

“Zhège,” she said, pointing emphatically at the top right of her iPhone screen. She leaned further into the gap between the passenger and driver seats of the taxi. Then, lifting her head, she pointed forward through the windscreen in a direction that, I assumed, was where we were hoping soon to be headed.

The taxi driver looked at her quizzically.

Undeterred, she repeated the motion, accompanied by a slower, more carefully enunciated rendition of the word: “zhège”. This time she added a new motion. She pointed at the bottom left of her iPhone screen, at herself, at the taxi driver himself, and then at the ground below us. Balletic though this motion was, it did not reduce the look of confusion on the driver’s face.

Gently taking the device from her hand, he studied the screen. A moment later, his expression changed. He smiled and nodded. He stretched out the index finger on his right hand, pointed to the location on the screen she had isolated, and said “zhège”. He handed the device back to her, flipped on the meter, and grasped the steering wheel. A second later we accelerated out of the taxi rank. He had understood the point of her point(s).

My traveling partner, Shelly, and I know precisely 6 words of Chinese. ‘Zhège’ is one of them. We cannot actually pronounce any of the words we know with any consistency. Sometimes, people nod in understanding. Mostly they don’t. However, the scenario I painted above is how we navigated two weeks in China. The word ‘navigated’ is intentional–it is about the physical and conceptual traversal of options. We navigated space and location. We navigated food. We navigated products. We navigated shopping locations, shopping possibilities and shopping traps (always a concern for tourists, wherever they may be). We did all this navigation speechless; owing to our linguistic ignorance, we accomplished it by pointing. We pointed at menus. We pointed at paper and digital maps. We pointed at applications on our phone screens. We pointed at ourselves. We pointed at desired products. We pointed in space toward unknown distant locations… Basically, we pointed our way to just about all we needed and/or wanted, and we got our way around Beijing with surprisingly few troubles.

Pointing of this kind is a deictic gesture. The Wikipedia definition for ‘deixis’ is the “phenomenon wherein understanding the meaning of certain words and phrases in an utterance requires contextual information. Words are deictic if their semantic meaning is fixed but their denotational meaning varies depending on time and/or place.” In simpler language, if you point and say “this”, what “this” refers to is fixed to the thing at which you are pointing. In the scenario above, it was the location on a map where we wanted to go. Linguists, anthropologists, psychologists and computer scientists have chewed deixis over for decades, examining when the words “this” and “that” are uttered, how they function in effective communication and what happens when misunderstandings occur. In his book Lectures on Deixis, Charles Fillmore describes deixis as “lexical items and grammatical forms which can be interpreted only when the sentences in which they occur are understood as being anchored in some social context, that context defined in such a way as to identify the participants in the communication act, their location in space, and the time during which the communication act is performed”. Stephen Levinson, in his 1983 book Pragmatics, states that deixis is “the single most obvious way in which the relationship between language and context is reflected”.

Pointing does not necessitate an index finger. If conversants are savvy to each other’s body movements–that is, their body ‘language’–it is possible to point with a minute flicker of the eyes. A twitch can be an indicator of where to look for those who are tuned in to the signals. Arguably, the better you know someone, the more likely you are to pick up on subtle cues because of well-trodden interactional synchrony. But even with unfamiliar others, where there is no shared culture or shared experience, human beings as a species are surprisingly good at seeing what others are orienting toward, even when the gesture is not as obvious as an index finger jabbing the air. Perhaps it is because we are a fundamentally social species with all the nosiness that entails; we love to observe what others are up to, including what they are turning their attention toward. Try it out sometime: stop in the street and just point. See how many people stop and look in the direction in which you are pointing.

Within the field of human-computer interaction–HCI–much of the research on pointing has been done in the context of remote collaboration and telematics. However, pointing has been grabbing my interest of late as a result of a flurry of recent conversations where it has been suggested that we are on the brink of a gestural revolution in HCI. In human-device/application interaction, deictic pointing establishes the identity and/or location of an object within an application domain. Pointing may be used in conjunction with speech input–but not necessarily. Pointing does not necessarily imply touch, although touch-based gestural interaction is increasingly familiar to us as we swipe, shake, slide, pinch and poke our way around our applications. Pointing can be a touch-less, directive gesture, where what is denoted is determined through use of cameras and/or sensors. Most people’s first exposure to this kind of touch-less gesture-based interaction was when Tom Cruise swatted information around by swiping his arms through space in the 2002 film Minority Report. However, while science fiction interfaces often inspire innovations in technology–it is well worth watching presentations by Nathan Shedroff and Chris Noessel and by Mark Coleran on the relationship between science fiction and the design of non-fiction interfaces, devices and systems–there really wasn’t anything innovative in the 2002 Minority Report cinematic rendition of gesture-based interaction, nor in John Underkoffler’s [1] presentation of the non-fiction version of it, g-speak, in a TED Talk in 2010. Long before this TED talk, Richard Bolt created the “Put That There” system in 1980 (demoed at the CHI conference in 1984). In 1983 Gary Grimes at Bell Laboratories patented the first glove that recognized gestures, the “Digital Data Entry Glove”. Pierre Wellner’s work in the early 1990s explored desktop-based gestural interaction, and Thomas Zimmerman and colleagues used gestures to identify objects in virtual worlds using the VPL DataGlove in the mid 1980s.

This is not to undermine the importance of Underkoffler’s demonstration; gesture-based interfaces are now more affordable and more robust than these early laboratory prototypes. Indeed, consumers are experiencing the possibilities every day. Devices like the Nintendo Wii and the Kinect for Xbox 360 system from Microsoft are driving consumer exuberance and enthusiasm for the idea that digital information swatting by arm swinging is around the corner. Anecdotally, an evening stroll around my neighbourhood over a holiday weekend will reveal that a lot of people are spending their evenings jumping around, gesticulating and gesturing wildly at large TV screens, trying to beat their friends at flailing.

There is still much research to be done here, however. The technologies, their usability, but also the conceptual design space need exploration. For example, current informal narratives around gesture-based computing regularly suggest that gesture-based interactions are more “natural” than other input methods. But, I wonder, what is “natural”? When I ask people this, I usually get two answers: better for the body and/or simpler to learn and use. One could call these physical and cognitive ergonomics. Frankly, I am not sure I buy either of these yet for the landscape of current technologies. I still feel constrained and find myself repeating micro actions with current gesture-based interfaces. Flicking the wrist to control the Wii does not feel “natural” to me, neither in terms of my body nor in terms of the simulated activity in which I am engaged. Getting the exact motion right on any of these systems feels like cognitive work too. We may indeed have species-specific and genetic predispositions to being able to pick up certain movements more easily than others, but that doesn’t make most physical skills “natural” as in “effortless”. Actually, with the exception of lying on my couch gorging on chocolate biscuits, I am not sure anything feels very natural to me. I used to be pretty good at the movements for DDR (Dance Dance Revolution) but I would not claim these movements are in any sense natural, and these skills were hard won with hours of practice. It took hours of stomping in place before stomping felt “natural”. Postures and motions that some of my more nimble friends call “simple” and “natural” require focused concentration for me. “Natural” also sometimes gets used to imply physical skill transfer from one context of execution to another. Not so. Although there is a metaphoric or inspired-by relationship to the ‘real’ physical world counterparts, with the Wii I know I can win a marimba dancing competition by sitting on the sofa twitching, and I can scuba-dive around reefs while lying on the floor more or less motionless, twitching my wrist.

An occupational therapist friend of mine claims that there would be a serious reduction in repetitive strain injuries if we could just get everyone full-body gesturing rather than sitting tapping on keyboards with our heads staring at screens. It made me smile to think about the transformation cube-land offices would undergo if we redesigned them to allow employees to physically engage with digital data through full-body motion. At the same time, it perturbed me that I may have to do a series of yoga sun salutations to find my files or deftly execute a ‘downward facing dog’ pose to send an email. In any case, watching my friends prance around with their Wiis and Kinects gives me pause and makes me think we are still some way away from anything that is not repetitive strain injury inducing; we are, I fear, far from something of which my friend would truly approve.

From a broader social perspective, even the way we gesture is socially prescribed and sanctioned. It’s not just that a gesture needs to be performed well enough for others to recognize it; how you gesture or gesticulate is socially grounded, and we learn what are appropriate and inappropriate ways to gesture. Often, assessments of other cultures’ ways of gesturing and gesticulating are prime material for asserting moral superiority. Much work was done in the first half of the 20th century on the gestural and postural characteristics of different cultural groups. This work was inspired in part by Wilhelm Wundt’s premise in Völkerpsychologie that primordial speech was gesture and that gesticulation was a mirror to the soul. Much earlier than this research, Erasmus’s bestseller De civilitate morum puerilium, published in 1530, includes an admonition that translates as “[Do not] shrug or wrygg thy shoulders as we see in many Italians”. Adam Smith compared the English and the French in terms of the plenitude, form and size of their gesturing: “Foreigners observe that there is no nation in the world that uses so little gesticulation in their conversation as the English. A Frenchman, in telling a story that is of no consequence to him or anyone else will use a thousand gestures and contortions of his face, whereas a well-bred Englishman will tell you one wherein his life and fortune are concerned without altering a muscle.” [2]

Less loftily, cultural concerns for the specifics of a point were exemplified recently when I went to Disneyland. Disney docents point with two fingers, not just an outstretched index finger but both the index finger and middle finger. When asked why, I was informed that in some cultures pointing with a single index finger is considered rude. Curious, I investigated. Sure enough, a (draft) Wikipedia page on etiquette in North America states clearly “Pointing is to be avoided, unless specifically pointing to an object and not a person”. A quick bit of café-based observation suggests people are unaware of this particular gem of everyday etiquette. Possibly apocryphally, I was also told by a friend the other night, when opining on this topic, that in some Native American cultures it is considered appropriate to point with the nose. And, apparently, some cultures prefer lip pointing.

So why bother with this pondering on pointing? I am wondering what research lies ahead as this gestural interface revolution takes hold. What are we as designers and developers going to observe and going to create? What are we going to do to get systems learning with us as we point, gesture, gesticulate and communicate? As humans, we know that getting to know someone often involves a subtle mirroring of posture, the development of an inter-personal choreography of motion–I learn how you move and learn to move as you move, in concert with you, creating a subtle feedback loop of motion that signifies connection and intimacy. Will this happen with our technologies? And how will they manage with multiple masters and mistresses of micro-motion, of physical-emotional choreography? More prosaically, as someone out and about in the world, as digital interactions with walls and floors become commonplace, am I going to be bashed by people pointing? Am I going to be abashed by their way of pointing? Julie Rico and Stephen Brewster of Glasgow University in Scotland have been doing field and survey work on just this, addressing how social setting affects the acceptability of interactional gestures. Just what would people prefer not to do in public when interacting with their digital devices, and how much difference does it make if they do or don’t know the others who are present? Head nodding and nose tapping, apparently, are more likely to be unacceptable than wrist rotation and foot tapping [3]. And what happens when augmented reality becomes a reality and meets gestural interaction? I may not even be able to see what you are thumbing your nose at–remembering that to thumb one’s nose at someone is the highest order of rudeness and indeed the cause of many deadly fights in Shakespearean plays–and I may assume, for lack of a shared referent, that it is in fact me, not the unseen digital interlocutor, at whom the gesture is directed. And finally, will our digital devices also develop subtle sensibilities about how a gesture is performed, beyond simple system calibration? Will they ignore us if we are being culturally rude? Or will they accommodate us, just as the poor taxi driver in China did, forgiving us for being linguistically ignorant, and possibly posturally and gesturally ignorant too? I confess I don’t know whether pointing with a single index finger is rude in China or not. I didn’t have the spoken or body language to find out.

NOTE: If you are bewildered by the array of work on gesture-based interaction that has been published, it is useful to have a framework. Happily, one exists. In her PhD thesis, Maria Karam [4] elaborated a taxonomy of gestures in the human-computer interaction literature, summarized in a working paper written with m.c. schraefel [5]. Drawing on work by Francis Quek from 2002 and earlier work by Alan Wexelblat in the late 1990s, this taxonomy breaks research into different categories of gesture style: gesticulation, manipulations, semaphores, deictic gestures and language gestures.


[1] John Underkoffler was the designer of Minority Report’s interface. The g-speak system tracks hand movements and allows users to manipulate 3D objects in space. See also SixthSense, developed by Pranav Mistry at the MIT Media Lab.
[2] For more on this see A Cultural History of Gesture, Jan Bremmer and Herman Roodenburg, Polity Press, 1991
[3] Rico, J. and Brewster, S.A. Usable Gestures for Mobile Interfaces: Evaluating Social Acceptability. In Proceedings of ACM CHI 2010 (Atlanta, GA, USA), ACM Press.

[4] Karam, M. (2006) PhD Thesis: A framework for research and design of gesture-based human-computer interactions. PhD thesis, University of Southampton.

[5] Karam, M. and schraefel, m. c. (2005) A Taxonomy of Gestures in Human Computer Interactions. Technical Report ECSTR-IAM05-009, Electronics and Computer Science, University of Southampton.


Resources on Experience Design, a selection

Here are some resources/references/publications on Experience Design that were used to develop a Tutorial at CSCW 2011 in China that Elizabeth Goodman, Marco de Sa and I prepared and delivered. Thanks to all Facebook friends who offered suggestions.

CAVEAT: This is NOT an exhaustive list obviously; there are some excellent resources out there. These just happen to be some we used when creating the tutorial. We also drew on some of the content that was taught at a workshop at UX Week 2010.

TEXTS

Industry perspectives on UX design

  • Peter Merholz, Todd Wilkens, Brandon Schauer, and David Verba (2008), Subject To Change: Creating Great Products & Services for an Uncertain World: Adaptive Path on Design. O’Reilly Media.
  • Bill Moggridge (2006) Designing Interactions. MIT Press
  • Mike Kuniavsky (2010) Smart Things: Ubiquitous Computing User Experience Design, Morgan Kaufmann

Skills and techniques

  • Kim Goodwin (2009) Designing for the Digital Age: How to Create Human-Centered Products and Services, Wiley Publishing

Sketching and prototyping

  • Bill Buxton (2007) Sketching User Experiences, Elsevier (hovers in between academic and industry perspectives)
  • Dan Roam (2009) The Back of the Napkin: Solving Problems and Selling Ideas with Pictures, Penguin Group USA

Business

  • Roger Martin (2009) The Design of Business: Why Design Thinking is the Next Competitive Advantage, Harvard Business School Publishing.
  • Tim Brown with Barry Katz (2009) Change by Design. How Design Thinking Transforms Organizations and Inspires Innovation, Harper Collins.
  • Alan Cooper (1999) The Inmates are Running the Asylum, Macmillan Publishing Co., Inc.

Psychology and philosophy of experience

  • Don Norman (1998) The Design of Everyday Things. MIT Press
  • M. Csikszentmihalyi (2003) Good Business: Leadership, Flow and the Making of Meaning (Viking, New York)
  • Csikszentmihalyi, Mihaly (1998). Finding Flow: The Psychology of Engagement With Everyday Life. Basic Books.
  • Byron Reeves and Clifford Nass (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, University of Chicago Press.
  • Peter Wright & John McCarthy (2004) Technology As Experience. MIT press
  • Tor Nørretranders (1998). The User Illusion: Cutting Consciousness Down to Size. Viking.
  • Merleau-Ponty, Maurice. Trans: Colin Smith. Phenomenology of Perception (London: Routledge, 2005)
  • Wolfgang Iser (1980) The Act of Reading: A Theory of Aesthetic Response, Johns Hopkins University Press

Visual design

  • Stop Stealing Sheep (type usage)
  • UI Wizards.com

Storyboarding

  • Scott McCloud (2006) Making Comics, Harper
  • John Hart (2008) The Art of the Storyboard, Second Edition: A filmmaker’s introduction. Elsevier.
  • Wendy Tumminello (2004) Exploring Storyboarding (Design Exploration). Thomson Learning.

Video

  • Steven Douglas Katz (1991) Film directing shot by shot: visualizing from concept to screen, Focal Press (on movie making; good for video and transitional literacy)
  • Scott Kelby (2006) The Digital Photography Book, Peachpit Press

Some other books/resources we have drawn on in the past

  • Richard Saul Wurman (2000) Information Anxiety 2, Pearson Education
  • Works by Edward Tufte
  • Hiroshi Ishii’s work.
  • Neil Gershenfeld (1999) When Things Start to Think, Holt, Henry & Company, Inc.
  • Brain Rules by John Medina (see also http://brainrules.net/)
  • What the Bleep Do We Know? (and ancillary pieces, such as the movie and the Web site)
  • Richard Sexton (1987) American Style: Classic Product Design from Airstream to Zippo

Online resources
http://www.nathan.com/ed/index.html

We also watched a number of short presentations and interviews online with Experience Design researchers, practitioners and luminaries.

Making Time

[This is a draft of a column I wrote for the ACM’s interactions magazine.  It will appear mid 2011].
______________________________________________________

One morning, as Gregor Samsa was waking up from anxious dreams, he discovered that in his bed he had been transformed into a monstrous verminous bug.

Thus begins one of my favourite novels, The Metamorphosis by Franz Kafka. What is most remarkable about Gregor’s awakening, and his discovery that he has metamorphosed into a dung beetle, is that, in the minutes that follow, his greatest concern is that he has missed his train.

Like Gregor, I have had time and schedules much on my mind of late. Why? Well, firstly, I overslept the other day. My phone is my alarm clock. Sadly, my phone had died quietly during the night. Ergo–no alarm to awaken me. Although I did not wake up a dung beetle, I was nevertheless disoriented. Secondly, about a week ago, I missed a meeting. Well, strictly speaking, I didn’t miss it, because I didn’t know I was supposed to be at it. All I can surmise is that there had been a breakdown in the complicated network of services, applications, devices and people that constitutes the sociotechnical practice of time management called “calendaring”. The meeting was clearly listed on my colleague’s calendar, but not on mine.

So, given my recent horological mishaps, I have been ruminating on the concept of time and its management through calendars and alerts.

Calendars reckon past and/or future time. The primary purpose of the calendar is the orientation of our bodies and minds–and those of others–in time and space. In contrast to the fluidity of experienced time, calendars create boundaries between activities. They prescribe the amount of time we should spend on something: 30 minutes with Jane talking about her project, an hour for the meeting on budget, an hour giving a lecture on HTML5, thirty minutes on a mandated management course… and of course, finally, a day of rest.

To be effective social coordinators, calendars require that we share an idea of how time is structured, how it breaks down quantitatively. My minute and yours should both be 60 seconds; thus we can pass time at the same rate quantitatively–even if, qualitatively, for me the hours have rushed by and for you they have felt like swimming in treacle. And, we should share an idea of exactly when 8pm is if we are going to meet for dinner at 8pm.

Calendars don’t just keep individuals synchronised. Calendars, so scholars like the sociologist Emile Durkheim tell us, are central to societal order. Calendars are the sentinels of ‘appropriate’ behavior. Minutes and days and hours often have activities associated with them–indications of when we should work, rest, pray and/or play. Different social values are placed on different hours of the day and on days of the week; in many calendars Saturdays and Sundays are by default given less space, reflecting social norms that separate workdays from (non-work) weekend days. Routine, calendared time is central to creating a social sense of belonging. In his 2006 article, Tim Edensor argues that structured time in the form of everyday rhythms–which he breaks down into institutionalized schedules, habitual routines, collective synchronicities and serialized time-spaces–is how a sense of national identity and belonging is sustained. One can see this play out in my neighbourhood, wherein many different immigrant cultures reside. What is considered an appropriate time for dinner differs by several hours: between 6pm and 7pm for some, between 9pm and 10pm for others.

I suspect most of us take for granted the idea that we have a shared concept of time. However, the carving up of time into seconds, minutes, hours, days, months and years is a convention, and the familiar structure of the predominant western calendar–the Gregorian calendar, which was only introduced in 1582–differs from classical calendars like the Mayan, Aztec and Inca, and the more recent Julian calendar[1]. Notably, Russia and Greece only converted to the Gregorian calendar from the Julian calendar in the 20th century. Further, it has not always been the case that someone in Bangalore could so easily work out exactly what time it is for me in San Francisco. It was only in the 1880s that a uniform time was imposed in Britain; until then, time in Britain varied according to location. This local time stood in contrast to ‘London time’ (i.e. Greenwich Mean Time (GMT)); Oxford was five minutes behind London, while Plymouth was twenty minutes behind[2]. In his book The Culture of Time and Space 1880-1918, Stephen Kern writes of the railroads in the US, “Around 1870 if a traveler from Washington to San Francisco set his watch in every town he passed through, he would set it over 200 times”. The railroads instituted uniform time on November 18, 1883. In 1884 Greenwich was fixed as the zero meridian and 24 time zones, one hour apart, were established. Countries signed up to this structuring of time one by one: Japan in 1888, Belgium and Holland in 1892, Germany, Austro-Hungary and Italy in 1893. At the International Conference on Time in 1912 the telegraph was proposed as the method of maintaining accurate time signals and transmitting them around the world; astronomical readings were to be taken and sent to the Eiffel Tower, which would relay them to eight stations spaced over the globe. This process was inaugurated on July 1st 1913 at 10am. Global time was born, and the death knell rang for the quaint custom of local time. In an odd way, then, we can trace our globally shared, personal and corporate calendars back to the railroads, which instigated the rationalization of time across the globe. It’s quite fitting, therefore, that missing the train is foremost in Gregor’s mind when he wakes up.
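Today, of course, that reckoning between Bangalore and San Francisco is trivial. As a small, purely illustrative sketch (the date is arbitrary, not anything from the column), the conversion is a couple of lines of Python:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# One shared global timeline, rendered in two local times.
dinner_sf = datetime(2011, 3, 1, 20, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
print(dinner_sf.astimezone(ZoneInfo("Asia/Kolkata")))  # 2011-03-02 09:30:00+05:30
```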

However, while synchronised global time connects us, it is all too easy sometimes to forget that there are in fact a number of calendars in operation in parallel today–Chinese, Hebrew and Islamic are just three examples.

As I turn back to my missed meeting, I note that calendars have ecclesiastical origins; the Book of Hours structured time into routines for work and worship for monks in the Benedictine order. However, in sharp contrast to the quiet, stable regularity of the liturgical life, my calendar is a chaotic beast in constant need of maintenance and management. Meetings pop on and off like jumping beans as the hoping-to-be-assembled try to find a time that works for all concerned. Vigilance is required lest one be triply booked, and priorities are always being calculated: Is this meeting more important than that one? But if so-and-so is there, then that is a good opportunity to get things moving forward… Oh no, now they are not going to be there after all, and yet I am committed to going; how do I shift this around… and on and on.

The root of the problem lies in the multiples–multiple calendars and multiple people on one calendar. On the first point, I have too many calendars, and the effective synchronization of my calendars is not a solved problem. Ghost (long departed/deleted) meetings haunt the calendar on my computer, while my mobile phone presents a suspiciously clean blank slate. Sometimes there is little correspondence between the two, despite their notionally being jacked in to the same server. On the second point, shared calendars (such a good idea in principle) are a gargantuan, social rogue elephant. Herein lie clashes in culture, herein lie power relationships and herein lies a network of complex dependencies. Routine issues arise for me in the following forms: blank space on the calendar, the curse of durational rigidity, the clash between sociotemporal and biotemporal time, and the problem of travel time. Let’s briefly review each of these…

‘Idle’ time: People routinely look at my calendar to determine when I am free to meet; they plop meetings on my calendar based on what they see as ‘free’ time. This is based on a fallacious assumption–that if there is nothing recorded there, then I am free. This is a misreading of my practice of calendar use. Booked times on my calendar are not simply islands of colour in a collaborative paint-by-numbers schematic where the blanks are inviting others to fill them in–I saw a gap so I filled it.

Of course, idle time is anathema to the shared calendar in a culture where not actively doing something could be interpreted as shirking. In my view, a day of back-to-back meetings means there is too little time for creative thought or for reflection. Research indicates that time when one is doing the least, as for example when meditating, is when the most creative moments can occur[3]. The jammed calendar, continual context-switching and mad dashes from one location to another are emotionally draining, mania inducing and counter to creativity.

So I sometimes put “meetings” onto my calendar simply to block out some thinking time. I feel sheepish about this. I am reminded of a friend of mine who, when we were teenagers, used to write things like “peas and carrots for tea” in her journal. Recording peas and carrots was not evidence of some dietary obsession; they stood in as code for ‘held hands’ and ‘kissed’, reporting on her teenage encounters with her boyfriend; the code was invented lest her mother should read her journal and be mortified by her teenage explorations. So it is that I transform thinking, writing and reading into ‘Strategy’ and ‘Planning’, appropriate behaviours for a corporate context. Durkheim and followers are correct: how one manages one’s time is an issue of morality and social accountability, not just temporal coordination. It’s a tricky business.

Durational rigidity: For the operationally minded, a meeting that is scheduled for an hour must last an hour, even when nothing is being achieved. On the other side, sometimes one is just warming up, just getting to the crux of a problem, when the hour is up and the meeting has to end, truncating the creative process.

Travel time: Another problem, and one where a simple technical solution would help out, is travel time between locations. When one works in several different office buildings that are miles apart, it takes time to get from one to the other. It would be useful if I could hook my calendar up to these locations and have travel time calculated and reflected automatically. So if a meeting is dropped onto my calendar, travel time is automatically blocked in–in fact, I could imagine a lot of background calculating that could be done by hooking my calendar up to location and to my social services and applications[4].
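A minimal sketch of the kind of background calculation I have in mind, assuming a hypothetical lookup table of travel times between named buildings and a toy in-memory calendar (this is not any real calendar API; in practice the travel estimate would come from a mapping or location service):

```python
from datetime import datetime, timedelta

# Hypothetical travel times between office buildings, in minutes.
TRAVEL_MINUTES = {
    ("Building A", "Building B"): 20,
    ("Building B", "Building A"): 20,
}

def block_travel_time(calendar, new_meeting):
    """When a meeting lands on the calendar, insert a travel block
    before it if the preceding meeting is at a different location."""
    previous = max(
        (m for m in calendar if m["end"] <= new_meeting["start"]),
        key=lambda m: m["end"],
        default=None,
    )
    if previous is None or previous["location"] == new_meeting["location"]:
        return None
    minutes = TRAVEL_MINUTES.get((previous["location"], new_meeting["location"]), 30)
    travel = {
        "title": "Travel to " + new_meeting["location"],
        "start": new_meeting["start"] - timedelta(minutes=minutes),
        "end": new_meeting["start"],
        "location": "in transit",
    }
    calendar.append(travel)
    return travel

# Example: a meeting dropped onto the calendar automatically gains a travel block.
calendar = [{"title": "Budget review", "location": "Building A",
             "start": datetime(2011, 3, 1, 9, 0), "end": datetime(2011, 3, 1, 10, 0)}]
meeting = {"title": "Project sync", "location": "Building B",
           "start": datetime(2011, 3, 1, 10, 30), "end": datetime(2011, 3, 1, 11, 0)}
block_travel_time(calendar, meeting)
calendar.append(meeting)
```

The interesting design questions start after the easy part: who gets to see the travel block, and can someone else book over it?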

Biotemporal time: Working across time zones can be really hard. The cheerful calendar flattens time; it sees all times as equal. Calendars are simply tabulated time in a grid; they do not reflect lived time. Odd times for calls can sneak in there, creating social and personal dilemmas–I want to be a good citizen, but I know I am going to be less than my best at that time. Sociotemporal time (as in when it is appropriate to be working and when not) clashes here with biotemporal time. Being on a conference call when your body and your entire environment tell you that you should be sleeping is simply hard. Time may be global but my body is not.

None of my observations are earth-shatteringly novel. There has been a wealth of research in the HCI community, from the early 1980s and continuing today, on life scheduling and calendaring–in collocated and in distributed workgroups, in the home, in leisure groups, within families, between families, on paper, across paper and other devices, on personal computers, using mobiles, using location services, and with visual front-end experiences including 3D representations, to name just a few of the research directions. There are typologies of calendar user, such as that offered by Carmen Neustaedter and colleagues, who call out three different types of families—assigning them to the categories monocentric, pericentric, and polycentric according to the level of family involvement in the calendaring process. Monocentric families are those where the routine is centered on a primary scheduler; pericentric families have the calendar routine centered on the primary scheduler with infrequent involvement by secondary schedulers; and polycentric families are those where the calendar routine is still centered on the primary scheduler, yet secondary schedulers are now frequently involved. But despite all this work, there’s still plenty we can do in the world of sociotechnical design to rethink the calendar. My calendar does not feel “_centric” in any way; it feels chaotic.

“We shape our buildings and afterwards our buildings shape us,” said Winston Churchill in 1943. We could apply this observation to time: we shaped the calendar and now the calendar shapes us; it dictates how we (should) live. True to Louis Sullivan’s adage that form follows function, the digital calendar wears its assumptions and its intellectual heritage on its sleeve: computer science, psychology, information architecture and the ethical structure of the approved-of day. Perhaps we need a new tack.

In Branko Lukic and Barry Katz’s 2011 text, Nonobject, they explore product designs that sit at the interstices of philosophy and technology. They step back from simplistic notions of form and function to shake up how we think about products, to question what is ‘normal’ or taken for granted, and to question the values that are embedded within the typical form of everyday artifacts. In a section entitled Overclocked, they explore clocks and watches, our time-keepers. Katz writes, “as our measuring devices grow ever more accurate, we find ourselves perpetually ‘overclocked’, to use a term familiar to every computer hacker who has ratcheted up a component to run at a higher clock speed than it was intended for in order to coax higher performance out of a system. We do the same to ourselves.” A number of designs are presented: the Tick-Tock Inner Clock, which taps against the skin to let someone feel the passage of time, and the Clock Book, where time is laid out on pages we can turn–when we want to. Lukic’s watches and clocks invite us to rethink how we conceptualize, represent and manage time. Somewhat less extreme, but nevertheless a playful take on clock design, Alice Wang’s 2009 suggestion for the Tyrant alarm clock is brilliant. This alarm clock calls people from your address book on your mobile phone every three minutes if you don’t get up and turn it off; with this, Wang is betting that the anxiety of broadcasting your slothful habits to anyone in your address book will propel you to get up. Wang gleefully reports that it is the social guilt that will get people moving out of bed. Social anxiety has long been a driver for action; this is, I think, a nice example of it, and it is a step beyond thinking instrumentally about the clock’s utility/function in isolation from the rest of one’s life.

Let’s do the same thing with calendars. Let’s take a step back. Let’s follow Lukic and take our lead from Architectura Da Carta, the Italian tradition of articulating and illustrating the unlikely, the unbuilt and the unbuildable. Let’s use art, philosophy and technological creativity to envision a better aesthetic experience, to blast the calendar apart and rebuild it; let’s be better about enabling the plurality of private and public times that humans live in parallel; let’s automate the calculation of time in motion between location(s); let’s build in time for creativity and reflection as a social and moral imperative; let’s make a calendar that adapts the schedule when it realizes you have woken up having metamorphosed into a sentient dung beetle.


[1] See Anthony Aveni, Empires of Time: Calendars, Clocks and Cultures, New York: Basic Books, 1989

[2] See Journal of Design History Vol. 22 No. 2 Designing Time: The Design and Use of Nineteenth-Century Transport Timetables by Mike Esbester

[3] See for example The neuropsychological connection between creativity and meditation published in ‘Creativity Research Journal’, 2009 by Roy Horan

[4] See Lovett and colleagues on this in their Ubicomp 2010 paper: The Calendar as a Sensor: Analysis and Improvement Using Data Fusion with Social Networks and Location

Socializing at Cross Purposes

[This is a draft of a column I wrote for the ACM’s interactions magazine. It came out in Volume 17 Issue 1, January + February 2010. The full, final text with glorious imagery can be found here.]

______________________________________________________

Indulge me for a moment. I have a series of jokes I want to tell you:

  • How many social scientists does it take to screw in a lightbulb? None. They do not change lightbulbs; they search for the root cause of why the last one went out.
  • How many simulationists does it take to replace a lightbulb? There’s no finite number. Each one builds a fully validated model, but the light never actually goes on.
  • How many statisticians does it take to screw in a lightbulb? We really don’t know yet. Our entire sample was skewed to the left.

So what’s with the (not particularly funny) jokes? The point is that they play off particular ways of thinking. In doing so, they show us how different the world can appear, depending on the perspective.

This was evident in a recent meeting. Reminiscent of another set of common (and usually also not funny) jokes involving different nationalities walking into a bar, there were six people in a room: an interaction designer, a statistician with an interest in behavioral modeling, a social scientist, a computer scientist, a self-described “back end with a touch of front end” engineer, and a business executive. We were brainstorming about the potential for accessing social Web applications from personal mobile devices.

Two minutes into our conversation, I said, “We should start with some sound social principles.” This was my bland opening gambit, a preface. Or so I thought. Once I paused for a fraction of a second, everyone started talking at once—like whippets after the faux rabbit at a dog race, the conversation was off. Then it stopped, followed by blank looks. To mix my metaphors horribly: The conversation plumed, spiraled, and evaporated like the contrails of the Blue Angels on July 4th.

The problem was the word “social.”

A quick perusal of the dictionary yielded these definitions of social: relating to human society and its members, living together or enjoying life in communities or organized groups, tending to move or live together in groups or colonies of the same kind, and living or liking to live with others, disposed to friendly intercourse. Etymologically, the word derives from the Latin socialis, meaning “united,” “living with others,” and sequi, meaning “follower,” which should make contemporary social Web application designers happy.

The famous 17th-century philosopher John Locke spoke of “social” as meaning “pertaining to society as a natural condition of human life.” And as an adjective, “social” appears as: “social climber” (starting in 1926); “social work” (1890); “social worker” (1904); “social drink(ing)” (1976); “social studies” as an inclusive term for history, geography, economics (1938); and a concept close to our hearts in these hard times, “social security” as a “system of state support for needy citizens” (1908). That is the backdrop to the conversation I thought I was starting. However…

To the interaction designer, “social” invoked “social Web applications” and all that it means for human interaction with voting (thumbs up and down), favoriting (stars), contact lists and buddy lists, followers, avatars and profiles, chat threading, commenting, recommendations, and view counts. It meant a discussion of icons that suggested (or were derivative of) those on successful social media sites and multimedia content upload and sharing. Talk poured forth about social games and questionnaires, pokes and winks and friending. Let me be clear about my position: I love thinking about these issues, and have recently reviewed drafts for two excellent books in this area—Building Social Web Applications by Gavin Bell and Designing Social Interfaces by Christian Crumlish and Erin Malone. For the purposes of this meeting, though, tossing out all of these concepts was great, but it was also putting the cart before the horse. We’d get there, but not yet.

To the computer scientist, “social” sparked sweet, seductive imaginings of the social graph. Wikipedia defines a social graph by explaining that “a social network is a social structure made of individuals (or organizations) called ‘nodes,’ which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige.” The entry continues: “Social network analysis views social relationships in terms of network theory about nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. The resulting graph-based structures are often very complex.” No kidding. I love the complexity and curiosity of my species—human beings—and these ways of conceiving social relationships often strike me as dismayingly reductive. They’re very useful within their bounds, but they are summary abstractions of the lyrical complexity of everyday social life. We had a very fruitful foray at this meeting into social recommendations and boundaries—the complexity of “friend relations” and “access control privileges”; the connections between objects via hash tables; and connections between people, their stuff, and other people’s stuff. We discussed these things as inadequate approximations for supporting the negotiated and fluid natures of social trust relationships and the subtle boundaries we negotiate with others.
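For readers who have not met the formalism, a toy sketch makes the reduction concrete: people become keys, relationships become typed edges, and everything else falls away. (The names and tie types below are invented for illustration; this is not any particular social graph API.)

```python
from collections import defaultdict

# A toy social graph: people as nodes, typed ties as directed edges.
ties = defaultdict(list)

def add_tie(person, other, kind):
    """Record a tie of a given kind (friendship, kinship, follows, ...)."""
    ties[person].append((other, kind))

add_tie("alice", "bob", "friendship")
add_tie("alice", "carol", "kinship")
add_tie("bob", "carol", "follows")

def neighbours(person, kind=None):
    """Everyone tied to `person`, optionally restricted to one tie type."""
    return [other for other, k in ties[person] if kind is None or k == kind]

print(neighbours("alice"))              # ['bob', 'carol']
print(neighbours("alice", "kinship"))   # ['carol']
```

Everything the lyrical complexity of everyday social life amounts to has to be squeezed into that single `kind` label, which is precisely the reduction at issue.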

Somewhat related, my colleague with statistical training was excited to introduce aggregate behavioral models from activity data and collective intelligence from explicit data in our discussion about contemporary notions of “harnessing the hive.” We pressed through issues in database design and the potential for data mining, as well as relevance and recommendation algorithms for automatically inferring buzz, interest, and so on. Here, “social” was to be found in the shadows cast by humans clicking, clacking, typing, uploading across the interconnected networks of the Internet, and making connections betwixt and between things that were heretofore not there to be connected, or at least not visibly so. We discussed how we sometimes derive models—hypotheses, really—about behavior from these obscured traces and how we are sometimes fooled into seeing patterns where there in fact are none.

I won’t enumerate all views expressed, but surely you get the point. I also don’t want to pretend I was seeing the whole picture as this conversation was unfolding. I was seduced by all these viewpoints, just as my colleagues were entranced by possibilities for implementation and creation. But later—and it was some time later—when I pondered how this short conversation had played out, I realized that we had all collectively engaged in a “we could build/implement/create/design/make” discussion. I had intended to have a conversation at a higher level—one that addressed what people really need, or what would be really helpful and valuable to people. Stepping back even further in the ideation process, I would have liked a conversation about outlining which spaces to probe to see what people need.

Instead, we were enacting the classic “ready, fire, aim!” approach to design that has been a parody of innovation for many years now—design it because you can, stitch together what you already know how to do, throw it out and see what sticks. This is perhaps a natural human tendency because really creative thinking is hard. In a 1986 paper entitled “No Silver Bullet,” Fred Brooks differentiates software design and development processes; he calls them “essential” and “accidental.” “Essential” refers to the difficult part of understanding the domain for which we’re building software and the determination of what software to build in that domain—what activities to inspire, improve, or transform. “Accidental” is the programming and process that has to follow to implement the solution that has been devised. Brooks aptly points out that we have become quite good at training people and designing tools for the accidental part of software development, but the ideation part continues to loom. He succinctly states, “The hardest single part of software development [remains] deciding precisely what to build.”

So what I meant to inspire when I dropped my word-bomb was a discussion of the role that such a device or application could play in everyday life. I was talking about how whatever we propose fits into people’s social context, into how they manage their everyday doings. I was talking about people, about relationships and friendships, and about the contexts that we inhabit and co-create as we move through daily life. I was hoping to focus on the social settings that would afford, support, and allow a technology to be used. I was talking about delivering value in situ without disrupting the situ—or at least giving some thought to designing ethically; to considering the positive versus negative knock-on effects—any disruptions to the social status quo we wanted to inspire and those we did not. I was talking about social norms in the places people frequent: whether or not, for example, it is socially acceptable to whip out said devices and applications in public. I was not meaning to talk about ‘like’ buttons and ‘share’ buttons and diffusion models. I was talking about the whys and wherefores, not the hows, whats and whens.

I was silently channeling ergonomists and social scientists who address the physical and/or social constraints of the situation, which can in fact mean that an attention-grabbing application would be dropped in favor of physical and social comfort. I was talking about the potential contexts of use for any application we would build.

I am not saying we should have indulged in an infinite regress into the consequences of every design decision, but I am saying it is worth engaging in chain-reaction thinking. I was, rather, inviting us to understand what the problem was, and where there would be something useful for people, before we started coming up with solutions. It seemed to me that we needed to understand what we were designing for before we started designing.

I just started the conversation poorly.

I am trying to make a bigger point here, beyond the natural human tendency to trot out well-worn, already known processes and avoid thinking about hard things. I am also highlighting different ways of seeing the world. I am not a social linguist, but in my gathering there was a clash of understandings. Even in a group that believes it is aiming for the same goal, words mean different things to different people and provoke different semantic associations. This has consequences for collaborative, multidisciplinary design teams. People from different disciplines come from different “epistemic cultures”, a concept I am borrowing (and hopefully not mangling too much) from Karin Knorr Cetina’s book on the ways in which knowledge is created in scientific endeavours. Epistemic cultures are shaped by affinity and necessity; by the ways in which things that arise are constructed as issues or problems or growth opportunities; and also by historical coincidence. An epistemic culture determines how we know what we know. Think of places you have worked: how different organizational cultures think about and institute incentives, employee evaluation, teamwork, work-life balance, and so on. In the same way, “social” may be measured, used, and applied differently in different epistemic communities or cultures. We simply need to be alert to that fact. What this means practically is that we must be aware of the need for translation and persuasion for human-centered design in contexts where multiple parties are present—brokering between different epistemic communities and different constituencies in the design space. It also means that if one is to carry a vision from ideation to implementation and release, one had better be present for all the “decision” gates—be careful about how ideas are being understood as an application or artifact goes from inspiration to innovation. Because all along the way, the initial vision will slippy-slide away into the details, and those details may consume the whole. Rube Goldberg and W. Heath Robinson come to mind—creators of fantastic contraptions that maybe, perhaps, get the job done, but in the most circuitous manner possible, honoring the engineering over and above utility and aesthetics.

Before I take my leave, let me ask you, as an illustration of all the different perspectives on the simplest of technologies, how many designers does it take to change a lightbulb?

Well, normally one…but:

+1 because we need to user-test the lightbulb
+1 because marketing wants to build the box and brand the bulb
+1 because sales wants to go global with the lightbulb
+1 because engineering wants detailed specs
+1 since we need to evangelize about it
+1 since we need to build a user community around the lightbulb
+1 since we need to explore different directions we can take the lightbulb

The golden age of newsprint collides with the gilt age of internet news

[This is an early draft of a column I wrote for ACM’s interactions magazine. It appeared here and the final version is available from here. It appeared in Volume 16 Issue 4, July + August 2009 of the magazine].

_______________________________________________________

Sitting in the Economy Class seat on a United Airlines flight, I ducked for the third time as the gentleman next to me struggled to turn the page of his broadsheet newspaper.

While he was assimilating what was happening in the world, I was contemplating the unfortunate juxtaposition of two iconic forms – the over-sized broadsheet newspaper and the undersized airline seat – and the current state of two industries that find themselves in deep financial trouble.

News stories. Crosswords. Horoscopes. Book reviews. Political cartoons. Recipes. Print-dirtied fingers. Papier mâché. Stuffing sodden shoes. Wrapping fish and chips. Ad hoc packing materials. Fire kindling. These are the things that I think about when I think of newspapers. And despite the fact that I could never quite physically control a broadsheet without the aid of a table, I cannot believe that this everyday artifact may go away. But according to my friends here in the digiphilic environment of San Francisco it is inevitable – you can’t walk into a coffee shop, never mind turn on a TV or the radio, without hearing someone opine about the economic crisis that newspapers are facing and the likely disappearance of the daily rag. I am as shocked and mortified by this as I was by the 2003 news story that bananas may be extinct by 2013.

Newspapers have a long history. The first printed forerunners of the newspaper appeared in Germany in the late 1400s in the form of news pamphlets or broadsides, often highly sensationalized in content. In Renaissance Europe handwritten newsletters circulated privately among merchants, passing along information about everything from wars and economic conditions to social customs and “human interest” features. In 1556 the Venetian government published Notizie scritte, for which readers paid a small coin, or “gazetta”. The earliest predecessors of the newspaper, the corantos, were small news pamphlets that were produced only when some event worthy of notice occurred. In the first half of the 17th century, newspapers began to appear as regular and frequent publications. The first modern newspapers were products of western European countries like Germany (publishing Relation in 1605), France (Gazette in 1631), Belgium (Nieuwe Tijdingen in 1616) and England (the London Gazette, founded in 1665, is still published as a court journal). These periodicals consisted mainly of news items from Europe, and occasionally included information from America or Asia. They rarely covered domestic issues; instead English papers reported on French military blunders while French papers covered the latest British royal scandal. Newspaper content began to shift toward more local issues in the latter half of the 17th century. Still, censorship was widespread, and newspapers were rarely permitted to discuss events that might incite citizens to opposition. Sweden was the first country to pass a law protecting press freedom, in 1766. Timeliness was always an issue; news could take months to reach audiences. The invention of the telegraph in 1844 transformed print media. Now information could be transferred within a matter of minutes, allowing for more timely, relevant reporting, and newspapers appeared in societies around the world. This was truly a revolution.

The Internet is bringing about an even bigger revolution: timeliness, open rather than controlled information sharing and easy access. This shake-up is bigger than any other that has been faced in the last 100 years from the likes of radio and television. Broadcast radio in the 1920s was low-cost with broad distribution, and content delivery was often more timely. The newspapers responded by adding content that was not so easily represented through audio waves, providing more in-depth and visually vivid coverage of key stories. As the 1940s and 1950s came around, television appeared as the main challenger. Newspapers again responded, taking from television the short, pithy story format. Newspapers like USA Today responded with graphics and colour imagery. More generally, news publications started diversifying their content, mixing human interest stories with puzzles, crosswords, book reviews, cartoons, cooking recipes and all the good stuff we have grown to love. Newspapers became about browsing, grazing, sharing, surfing, with content that satisfied immediate information needs and longer-term general interests.

Despite radio and television, newspapers managed to retain their position in the information value chain. Not so anymore. There are three interrelated causes for this shift in the information ecosphere: internet-related innovations in news production and news dissemination; the impact of new digital devices that are changing the ways in which content is consumed; and a no longer viable business model.

Let’s quickly look at these in turn. It is obvious that the Internet has revolutionized news production and dissemination. Speedy transmission of information around the globe means news can reach us as events are unfolding – hot off the keyboard rather than the press, with images and video for that “being there” feeling. “Citizen journalists” give us the lay-person’s perspective on events that journalists cannot or have not yet reached. Indeed, reports from various disasters, from the fires in California to the shootings in Mumbai, came for many people first from Twitter, the microblogging service that is currently the darling of the media, and blogs from Iraq told us much more than we could possibly find out from our daily newspapers. The efficiency and effectiveness of this interconnected internet world cannot be denied. Production and consumption of news has also been transformed by the explosion of lightweight, wireless, internet-enabled recording and reading devices, plus the proliferation of computers in the home and in offices. Finally, the old business model is failing. The newspaper industry in the US has been generating most of its revenue from advertising for decades. The global recession and the resulting decline in advertising revenues has dealt a possibly fatal blow; the Newspaper Association of America reports that in 2008 total advertising revenues declined 16.6 percent to $37.85 billion, a $7.5 billion reduction on the numbers for 2007. Proposals on the table for saving the industry now include micropayment schemes plus bailout and/or government subsidies.

I don’t feel qualified to assess the likelihood of success for the various rescue schemes, from micropayments to government bailouts, and for the purposes of this column I will not go into the importance of ensuring we don’t lose good journalistic practice. But I am really worried about what is happening in the world of news, because I am screaming for a better news reading experience on my desktop and mobile devices. What the news industry at its best did really well is missing from the online reading experience: easy navigation of well-filtered content, effective selection and segmentation of that content, and a clear voice and viewpoint for the publication. Can we take the best of what we had in newsprint and create a good digital news reading experience? Here are some basics I would like to work on:

(1) Information quality: can we provide better tools for collecting and managing information gathered on the ground, tools that would improve quality and offer guidelines for coupling different media types (text, imagery, video) so as to avoid gratuitous visuals? Let’s be active in designing better technologies for production of the news by citizen and professional journalists and editors.

(2) Information architecture: can we design better relational models so we can surface relationships between stories that are actually meaningful, instead of the ‘also see’ hyperlink that takes me to a story from 5 years ago that somehow got linked to the current one (see the sketch after this list)? Can we design better tools for following story developments, for enabling the creation of narrative by producers and consumers?

(3) Representation: can we improve how information is represented – graphics, fonts, layouts – so that it is possible to skim more effectively?

(4) Reading devices: can we design for reading the news – what comes next after the Kindle? Is electronic paper, or Xerox’s promised reprintable paper, going to become a reality, so I can have the large-gesture, embodied experience of the broadsheet back, along with decent screen real estate for laying out content?

(5) Temporality: can we design anything better than the crass, ugly and inconvenient model of URL bookmarking to support different temporalities of information usefulness and different consumption paces, so that slow-burn stories persist while fast-burn stories are updated with new content?
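To make item (2) a little more concrete: one crude baseline for surfacing genuinely related stories is plain text similarity, which a newsroom would then temper with recency, named entities and editorial judgement. What follows is a minimal sketch, assuming Python with scikit-learn installed; the story snippets are invented and this is my own toy illustration, not a description of how any news site actually works.

    # Crude baseline for "related stories": rank archive stories by TF-IDF cosine
    # similarity to the current one. Purely illustrative; real systems would also
    # weigh recency, entities and editorial curation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def related_stories(current: str, archive: list[str], top_n: int = 3) -> list[int]:
        texts = [current] + archive
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
        # Similarity of the current story (row 0) against every archive story.
        scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
        return sorted(range(len(archive)), key=lambda i: scores[i], reverse=True)[:top_n]

    archive = [
        "City council votes to expand the light rail network next spring.",
        "Local bakery wins national prize for sourdough.",
        "Transit agency seeks federal funding for light rail extension.",
    ]
    current = "Mayor announces timeline for the new light rail line downtown."
    print(related_stories(current, archive))  # the transit stories should rank first

Even this toy version makes the design question visible: the hard part is not computing similarity, it is deciding which of the many "similar" stories actually helps a reader follow a development.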

In sum, we should more deeply address the practices of news readership. We should design for convenience and skimming. We should design filters and surfacers of quirky items or items that for some reason search algorithms find unpalatable. We should develop better editorial tools than we currently have.

I am not alone in wanting some good design heads on these problems. Addressing people’s everyday news consumption practices, a 2008 Associated Press ethnographic study cited email and internet-based sources as a mainstay of many young people’s experience of the news. However, those interviewees, plus interviewees in a study I am currently running in the Bay Area, all talk about the “work” of reading the news online and report that “news fatigue” is increasing. What this seems to boil down to is that there are plenty of places to find news on the internet, but in all this bacchanalian information glut the shallow story dominates, it is often hard to find the follow-up to a reported news item, and there is a lot of repetition. On that last point, the Project for Excellence in Journalism observed in their 2006 State of the News Media report that of the 14,000 unique stories found on an internet news aggregator site in one 24-hour period, there were in fact only 24 discrete news events. There is vastly more content available, of course, and things have improved somewhat since 2006, but that other content is comparatively hard to find. And online content usually does not offer the structured, well-designed experience that its printed counterpart does. Ethan Zuckerman of the Berkman Center blogs about his experience of a national newspaper’s online presence: “…counting possible links (using a search for anchor tags in the source HTML), there are 423 other webpages linked from the front page. A more careful count, ignoring ads, links to RSS feeds and links to account tools for online readers, gives 315 content links, possible stories or sections a reader could explore from the front page. While there are almost 14 times as many pages for a reader to explore, they’ve got much less information on what links to follow: while twelve stories have text hooks, the wordcount ranges between 10 and 26 words. While there’s a good chance one of those stories might convince you to click on it, you won’t start reading it on the front page, the way you might with the 200-400 word stories in the paper edition.”

I just replicated his analysis by looking at three online papers. He’s right.
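For anyone who wants to try this at home, the rough count is easy to script. Here is a minimal sketch, assuming Python with the requests and beautifulsoup4 libraries installed, and a placeholder front-page URL rather than any particular paper; the "noise" filter is my own crude approximation of Zuckerman's "more careful count".

    # Rough replication of Zuckerman's front-page link count.
    # Assumes Python 3, requests and beautifulsoup4 installed;
    # the URL below is a placeholder, not any real paper's address.
    import requests
    from bs4 import BeautifulSoup

    FRONT_PAGE = "https://www.example-newspaper.com/"  # hypothetical front page

    def count_front_page_links(url: str) -> tuple[int, int]:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        anchors = soup.find_all("a", href=True)

        # Crude filter: drop obvious ads, feeds and account/tool links,
        # keeping what look like content links (stories and sections).
        noise = ("ad", "doubleclick", "rss", "login", "account", "subscribe", "#")
        content = [a for a in anchors
                   if not any(tok in a["href"].lower() for tok in noise)]
        return len(anchors), len(content)

    if __name__ == "__main__":
        total, content = count_front_page_links(FRONT_PAGE)
        print(f"{total} anchor tags, of which roughly {content} look like content links")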

But let’s not get ahead of ourselves or judge too early. We are only at the beginning. Right now we are at the stage the car was at in the late 1800s and early 1900s: in shape and form reproducing the horse-drawn carriage, not yet having found its own aesthetic reflective of its infrastructure and capability.

Newspaper companies are on board with enlisting others to aid in the design of the next generation of news forms. In early 2009, the New York Times Developer Network hosted its first API seminar so that outside developers can start designing and building new forms of content provision. The aim is to make the entire newspaper “programmable”. Programmers will be able to mash up the paper’s structured content: reviews, event listings, recipes, and so on. This is a great opportunity for those immersed in information and experience design.
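To give a flavour of what “programmable” can mean in practice, here is a minimal sketch that queries the Times’ Article Search API for headlines on a topic. It uses the present-day public endpoint and response format, which have changed since that 2009 seminar, and YOUR_API_KEY is a placeholder; treat it as illustrative rather than definitive.

    # Illustrative only: query the NYT Article Search API for stories on a topic.
    # The endpoint and response structure reflect the current public API, which
    # differs from the 2009 version; YOUR_API_KEY is a placeholder.
    import requests

    SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

    def search_headlines(query: str, api_key: str) -> list[str]:
        resp = requests.get(SEARCH_URL, params={"q": query, "api-key": api_key}, timeout=10)
        resp.raise_for_status()
        docs = resp.json().get("response", {}).get("docs", [])
        # Return the main headline text for each matching article.
        return [d.get("headline", {}).get("main", "") for d in docs]

    if __name__ == "__main__":
        for headline in search_headlines("newspaper industry", "YOUR_API_KEY"):
            print(headline)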

Some, for example, are pushing on really nice “read later” and bookmarking facilities. This is a good start. But you still have to know how to search for content, and spend time doing so. In my opinion, today’s aggregators, algorithms and automatically updated webpages are in no way, shape or form a replacement for the work done by a good editor and a good layout designer.

As I have been thinking about news, I conducted an informal review of a couple of local and national newspapers. Bearing in mind that I am only an interloper who is curious because reading the news is integral to my identity, I took a quick look at the differences between online and print versions. I could not discern any consistent re-representations between online and print. I wonder: Are there standard reformulations and standard channel “jumps” for different content types? Who is making those decisions, and how? How is the “shelf-life”, the temporal relevance, affected by the channel or the medium? And which of the services that were once the purview of the local newspaper have not been reformulated or replicated? What has happened to the content that once constituted the local daily paper? How has it morphed and reformed? Finally, I note that I have not seen many people argue that the very form of the newspaper, its affordances (size notwithstanding), may be missed. Indeed, the print form of the newspaper still has affordances that cannot be matched by the digital medium. These are, to me at least, complementary forms, but perhaps I am in the minority; I still print things out rather than reading them on a screen.

And there is another affordance of paper. It gets left lying around, apparently discarded but ripe for re-reading. That is not the case with contemporary digital devices, although who knows where the future may take us as device manufacturing gets cheaper. Information left lying around for others to consume is important. It allows others who are idling to encounter that which they might not otherwise have come across; to be literal, if I sit on a subway train and out of curiosity read the newspaper that has been discarded on the seat next to me, I am encountering something I did not choose, something that was not filtered for me. And just perhaps, I will learn something unexpected.

Addressing issues in the creation and dissemination of news is important; it is not enough to say that late adopters, or those who do not actively seek the news as opposed to having it literally pushed through the letter box, should catch up with us digerati and get those phone applications downloaded. Making it harder to get access to information affects civic engagement. Following the closure of the Cincinnati Post in late 2007, Princeton University economists Sam Schulhofer-Wohl and Miguel Garrido documented a decline in voter turnout, fewer candidates running in opposition to incumbents, and less knowledge of and debate around the issues and policies those incumbents supported. If democracy in some sense depends on an informed electorate, then making it harder for people to easily find digestible but detailed and well-balanced arguments is a serious problem. Even if you don’t agree with everything you read in a newspaper, encountering things that you have not actively selected broadens your outlook; this is the flip-side of how filtering and narrowing save time. Filtering saves time, but it also shuts down challenges to assumptions, and it is those challenges that help us grow and that create the debate of a functioning, democratic society. Newspapers can of course do the opposite; they can function to bring a community together in a shared narrative (and whether people agree with that narrative or not will drive whether they accept or fight it). Moving to purely digital forms, some say, could increase ‘discovery’ problems for the non-digerati, and thus the number of ‘news dropouts’.

So, I confess that personally I love the materiality of a good broadsheet newspaper and of the magazines that I read. And it annoys me just a little that, thanks to my beloved Kindle electronic reading device, I don’t have newspaper lying around the house to stuff into my rain-sodden shoes. But I am also looking forward to a world with better designed digital news formats. What we need is some technical savvy, a design sensibility and a deeper human-centered understanding of the Gestalt of news consumption in practice, between and across representational forms. We need something more than the current state of the art, which offers us only the most superficial, easiest-to-implement technical convergences. We need more than the horseless carriage of digital news.

Keep your hair on: Designed and emergent interactions for graphical virtual worlds

[This is a draft I wrote for my column, P’s&Q’s, in ACM’s interactions magazine. For it, I interviewed Bob Moore, who was at the time with the now-defunct Multiverse. It came out in Volume 15 Issue 3, May + June 2008. The final version can be found here.]

______________________________________________________

Chatting with virtual world researchers Jeffrey Bardzell and Shaowen Bardzell, I found out that a seriously desired artifact in Second Life, not unlike in First Life, is hair. Swishy, shiny hair. But there is one problem with this real-dreamy hair: such hair is computationally costly to render in comparison to your average avatar body. And so, sadly, your avatar body arrives before your hair. For a matter of moments, no matter how fashionable the outfits, everyone is as bald as a coot.

And it’s worse than that. Often your hair has failed to “rez” from other players’ perspectives, but not from your own. So you think you look hot, REALLY hot, until, that is, some newbie says, “Why are you bald?”

There are now apparently a lot of people routinely spending time hanging out in virtual worlds. MMO Crunch reported 36 million regularly active MMORPG players in August 2007, and these worlds are only getting more popular.

“Virtual worlds are fundamentally a medium for social interaction, one that takes the face-to-face conversation as its metaphor. As such it leverages users’ common sense knowledge: I see a humanoid avatar and I know that if I want to talk to that player, I should approach his or her avatar with my own,” says Bob Moore, interaction researcher and designer at Multiverse. As with all metaphors, however, there are fractures. Which matter and which don’t? When do we happily immerse and feel like we are really there, and when do we get amusingly or annoyingly jolted by something that doesn’t work? What are the key things designers are thinking about right now? Bob and I chatted for a while, reflecting on the research that has previously been done on graphical worlds, collaborative virtual environments (CVEs) and online gaming environments. Given my own background in researching text-based virtual worlds and animated interface characters, I was intrigued to hear Bob’s perspective. Here are some of the main points we covered in our conversation:

(1) For virtual worlds, user interface design is also social interaction design

Bob argues that one thing UI designers for virtual worlds have not yet fully grasped is that when users are interacting with the system, they are often at the same time interacting with other users. When standing avatar-to-avatar, if you ask me, “Did you get the sword?” and I promptly open my inventory, that UI action is a relevant part of the interactional context. Or if I say, “Let’s go” and you promptly open your map, that’s a relevant next action that I should know about. In almost all current virtual worlds, opening your inventory or your map triggers no public cues, only private cues for the individual user. So you not only need to give the individual user feedback about what the system is doing, you also need to give the other players feedback about what that user is doing. In other words, users’ interactions with the system should be made public.
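As a thought experiment (my own sketch, not anything Bob or Multiverse built), making those interactions public might amount to broadcasting lightweight UI events to nearby clients alongside avatar movement. All the names here are invented for illustration.

    # Hypothetical sketch: broadcast a player's UI actions (open inventory, open map)
    # to nearby players, so that interface use becomes part of the shared
    # interactional context. Names and the transport are invented for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class UIEvent:
        player_id: str
        action: str          # e.g. "open_inventory", "open_map", "start_typing"

    class PresenceChannel:
        """Fan out UI events to every subscribed (i.e. nearby) client."""
        def __init__(self) -> None:
            self.subscribers: list[Callable[[UIEvent], None]] = []

        def subscribe(self, callback: Callable[[UIEvent], None]) -> None:
            self.subscribers.append(callback)

        def publish(self, event: UIEvent) -> None:
            for notify in self.subscribers:
                notify(event)

    # Usage: when Ana opens her map, Ben's client can render a small "consulting map"
    # cue over Ana's avatar instead of leaving him guessing.
    channel = PresenceChannel()
    channel.subscribe(lambda e: print(f"{e.player_id} is now: {e.action}"))
    channel.publish(UIEvent(player_id="Ana", action="open_map"))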

(2) Where avatar bodies are not like physical bodies

Unlike in real life, most people tend to play in virtual worlds with the camera view zoomed back so they can see their avatars, rather than in true first-person view where you can only occasionally see your hands or legs. There are a couple of good reasons for this. Computer screens don’t allow for peripheral vision, but pulling back the camera can help mitigate this limitation by widening your field of view. Similarly, as Bob points out, avatars don’t allow for proprioception, our awareness of the position of our own body, and zooming back the camera also helps players deal with this fact. I may think my avatar should be waving like the Queen of England or winking flirtatiously at someone else because I typed /wave or /wink, but it’s hard to be sure if I can’t see my avatar. Maybe I mistyped the commands, or maybe the animation associated with the command actually looks more like a New Yorker hailing a taxi than Her Royal Highness.

(3) When the conversation lags behind the avatars

Despite a lot of the interesting audio experiments conducted over 15 years ago (see Benford, S. D. and Fahlén, L. E., Awareness, Focus, Nimbus and Aura – A Spatial Model of Interaction in Virtual Worlds, Proc. HCI International ’93, Orlando, Florida, 1993 as a good example), most virtual world conversations take place through chat. The avatars may be wandering about gesturing and wiggling their hips, but chat does not come out in audio from the mouth of the avatar. One problem this causes is discontinuity, a lack of congruence, between action and uttered words. Bob points out that typing a chat message is another kind of action that other players should know about, and he recounts a case in which a team of players is about to attack a group of “mobs,” or computer-controlled opponents. While one player is composing a question about how the team might change its tactics, a fellow player initiates combat. The tactical question then publicly appears too late. The pseudo-synchronous chat lags behind the synchronous avatars. Bob, who was trained in Conversation Analysis, explains that a key feature of real-life conversation is that you can hear a turn unfolding in real time. This enables you to do things like determine who should speak next, anticipate precisely when the turn will end so you can start your next turn with minimal gap and overlap, and even preempt the completion of the current speaker’s turn if you don’t like the direction it’s going. In other words, the ability to monitor other people’s turns-in-progress is a requirement for tight coordination in conversation. Most virtual worlds (with the exception of There) use IRC- or IM-style chat systems, and therefore do not allow players to achieve this tight coordination among their turns-at-chat and avatar actions. The result is an interactional experience that feels very unnatural (at first) and which motivates players to invent workarounds to the system.

(4) Perennials of place

One of the amazing things about virtual worlds is how quickly we get a sense of being co-present in a place with other people, even though it may be an image on a screen, a world into which we are kind of peering. And, as in the real world, ambience is created by building and room size and scale in relation to crowd size. In my research on MUDs/MOOs with Jeni Tennison, and in later work with Sara Bly, we found that even very simple text exchanges in textually described “rooms” can make dyads and groups feel co-present, immersed in the virtual world together. Humans tend to get engaged with each other as long as there is some consistent chain of action and reaction. Indeed, some analysts would argue that turn-taking in conversation is the fundamental unit of human communication and connectedness.

As we explored these concepts, Bob described a comparative ethnographic study he’d done of bars and dance clubs in multiple virtual worlds. He hung out in these social public spaces and analyzed features of the design that impacted the success of the space as a social environment. One key feature of club design is size. Bob discovered that, while construction in these worlds is cheap compared to real life, it is more difficult to fill these spaces with people than it is to fill the real-life urban centers that researcher William H. Whyte examined. As a result, the dance club in City of Heroes and the majority of player-built clubs in Second Life are simply too large. They feel like an airport terminal or concert hall rather than a corner pub.

So in order to achieve the kind of social density necessary for a vibrant social space, or “third place” as academic Ray Oldenburg would call it, designers should make virtual bars and clubs much smaller than they currently do. The most successful virtual third place that Bob discovered was a Second Life bar that was intentionally tiny. In order to get into the place, you had to “rub elbows” with other patrons. The place felt “busy” with only five players and “hoppin’” with twenty. And everyone was within everyone else’s chat radius, which facilitated the public conversation. In other words, lessons from real-life urban design appear to apply in several ways to the design of virtual public places.
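The chat radius mechanic is simple to picture in code. Here is a toy sketch, my own illustration rather than any engine’s actual API, in which a message is delivered only to avatars within hearing distance; the design lesson from Bob’s bar study is then to shrink the room until most patrons fall inside that radius.

    # Toy illustration of a chat radius: a message is heard only by avatars within
    # a fixed distance of the speaker. Names and numbers are invented.
    import math
    from dataclasses import dataclass

    CHAT_RADIUS = 20.0  # hypothetical "hearing" distance in world units

    @dataclass
    class Avatar:
        name: str
        x: float
        y: float

    def within_earshot(speaker: Avatar, listener: Avatar, radius: float = CHAT_RADIUS) -> bool:
        return math.hypot(speaker.x - listener.x, speaker.y - listener.y) <= radius

    def deliver_chat(speaker: Avatar, message: str, patrons: list[Avatar]) -> None:
        for patron in patrons:
            if patron is not speaker and within_earshot(speaker, patron):
                print(f"[to {patron.name}] {speaker.name}: {message}")

    # A tiny, crowded bar keeps everyone inside one another's chat radius;
    # in an airport-sized club most patrons would simply never hear you.
    bar = [Avatar("Ana", 0, 0), Avatar("Ben", 5, 3), Avatar("Caz", 150, 90)]
    deliver_chat(bar[0], "busy in here tonight!", bar)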

(5) A range of skill sets and a modifiable world.

There are challenges in designing a world where “newbies,” or newcomers, can learn the ropes. If you want to learn about interacting in one, you simply have to get off (or rather, for most people, onto) the sofa and get in there. Go in-world. It is much easier to learn in-game than out of game, just as learning to play golf requires you to take up a golf club and try it; you just can’t learn by watching someone else. Bob says, “I’d recommend getting into a pick-up group and going and doing some adventuring”. There are plenty of folks in-world who are willing to help out, to show off their knowledge. Bob agrees that not everybody likes to help, and admits that whether you are more or less likely to be helped depends on some other factors… you guessed it: having an attractive female avatar means you are more likely to get help.

There are other things to learn aside from interactions and activities. Many virtual worlds allow people to buy, build and exchange things. Second Life is perhaps the primo example: it is a sandbox built on constructive geometry, which lets the platform stream your data and create your objects on the fly. But that also means people arrive, build stuff and leave it behind. There is a certain “I just learned to build today” look that Bob identifies as one of the main scars on the aesthetics of the virtual world. Despite the parallels that some people draw between the real live music/art/dance festival known as the Burning Man Project and the playful interactions and explorations in Second Life, there is no motto inviting us to “leave no trace” in Second Life. Of course the ecological consequences are somewhat different, but the visual aesthetic of clutter and detritus is experienced much the same by Second Life aficionados. So there is a tension between giving people the freedom to build anything they want and making sure the world doesn’t end up looking like the aftermath of an afternoon in a Montessori School for Gremlins. Beyond clean-up, though, there is another point. At this juncture the building tools are much better than they were, but they still leave a lot to be desired. I say: if we want Gaudi, not Brutalism, we need to provide better tools to scaffold the building endeavours of the folks in-world.

To get an understanding of issues like those above, and to push on understanding how people really experience these virtual/online places, Bob advocates close and detailed analysis of what is actually going on as it unfolds in real time: looking at patterns of action and interaction, at how those patterns develop and are understood, learned and evolved, and identifying the patterns that are persistent and prevalent. Too many people have theories about what is going on that are based on something completely external to the situation. “You need to get close to the phenomenon/experience,” says Bob. And we need to “be concerned both with the in-world simulation of face-to-face interaction AND the usability of the interface for puppeteering the avatars and interacting with the system.”

By looking at the challenges in interaction that people routinely encounter and work around, it is possible to ask how important – or disruptive to interaction in-world – those challenges are, and propose ways to address them through interaction, interface and system (re)design. I could not agree with him more.

And on that note, it’s time for me to get my hair on and go build a shack.

Reference: Bardzell, S., & Bardzell, J. (2007). Docile avatars: Aesthetics, experience, and sexual interaction in Second Life. Proceedings of British HCI 2007, Lancaster, UK.

Abstract again – sampled thoughts

Illusory boundaries in the “cyber-sociality” of virtual teams: ethnographic methods, the offline in the online and cautionary tales of business cyber ethnography.

Abstract:

If cyberspace is “the total interconnectedness of human beings through computers and telecommunication without regard to physical geography” (Gibson, 1984), then cyber-sociality lies in the details of engaging, maintaining and indeed managing this disembodied, mediated interconnectedness while operating simultaneously within multiple “social worlds” (Strauss, 1978). Reacting to the embrace of graphical simulation, the emergence of “virtual reality” and the promise of artificially intelligent agents, Gibson imagined a dystopian cyberspace (“cyber” deriving from the Greek for helmsman), a simulated, structured world into which one can “jack in”, away from this corporeal world.

“Cyberethnography”, by derivation and colloquial extraction, is the ethnography, the writing of the culture(s), of the computer-mediated tele-sociality of the physically disconnected. We have been using ethnographic methods (cyber and otherwise) to paint in the details of these acts of interconnection in “global corporations”, “virtual teams” and “cybercommerce” settings. Unlike many cyber-ethnographies (but entirely in keeping with ethnography unbounded by mediated or physically collocated locales of activity), we triangulate online and offline observation.

In this paper we present highlights from three case studies, which we believe lie along a continuum of Gibson’s ‘cyber’-ness, with more or less latitude for personal agency and for modification of the technology itself to manage the tele-mediated interaction. The first is a study of distributed teams collaborating primarily through video conferences and email. The second is a study of collaborative work in a text-based virtual environment where interaction takes place mostly online but also face to face. Finally, we present interactions in massively multiplayer environments, where collaboration and commerce are growing, and where control over one’s presence is entirely in the hands of the individual. In all three cases, we present an ecology of communication technologies, but view those technologies through the lens of an ecology of flows, spaces and connection practices, in the context of the broader social settings within which the interactions we have observed take place.

These case studies are used to render visible the often tacit boundaries of ethnographic data collection methods and reportage. While in all cases we draw on methods that have been loosely called “cyber-ethnography”, interested as we are in sociality in mediated situations, we illustrate how an understanding of that which lies beyond the keyboard and screen frames what is understood, and therefore drives new forms of data analysis. Sometimes generating these understandings is positively maddening in its methodological complexity. Humans have always, in fact, lived lives beyond our gaze. But in these studies we have experienced restrictions at many levels, which can be broadly characterized as 1. what can be recorded (logistically, we increasingly need to be technically adept to gather our data; many field sites in business contexts impose restrictions that curtail broad data collection; many ethical issues arise); 2. what can be analysed (time is the biggest constraint in many business ethnography settings, and this is amplified in studying these distributed settings); and finally 3. what can be reported (in many settings what is seen cannot be reported or will not be heard).

What does this mean for what we understand of sociality, and what does it mean for reflecting on what can and cannot, has and has not, been inferred? Ultimately in this paper we consider what counts as data, and who owns those data such that consent can be given for their collection, analysis and reportage: what does it mean for an avatar, one persona of many even within an organization, to grant me permission to record? Just as technology-supported communication generates new work practices, we are experiencing the old phenomenon of multiple selves in interaction in new worlds. This paper reflects on the issues involved. Each example will consider 1. the importance for work practice analysis, 2. the need for agility in method, and 3. the importance of deep analysis for patterns over time and technologies.

References
Gibson, W. (1984). Neuromancer. Ace Books.

Strauss, A. (1978). A social worlds perspective. In N. Denzin (ed.), Studies in Symbolic Interaction, vol. 1, Greenwich, CT: JAI Press, 119–128.