Monthly Archives: January 2010

Socializing at Cross Purposes

[This is a draft of a column I wrote for the ACM’s interactions magazine. It came out in Volume 17 Issue 1, January + February 2010. The full, final text with glorious imagery can be found here.]


Indulge me for a moment. I have a series of jokes I want to tell you:

  • How many social scientists does it take to screw in a lightbulb? None. They do not change lightbulbs; they search for the root cause of why the last one went out.
  • How many simulationists does it take to replace a lightbulb? There’s no finite number. Each one builds a fully validated model, but the light never actually goes on.
  • How many statisticians does it take to screw in a lightbulb? We really don’t know yet. Our entire sample was skewed to the left.

So what’s with the (not particularly funny) jokes? The point is that they play off particular ways of thinking. In doing so, they show us how different the world can appear, depending on the perspective.

This was evident in a recent meeting. Reminiscent of another set of common (and usually also not funny) jokes involving different nationalities walking into a bar, there were six people in a room: an interaction designer, a statistician with an interest in behavioral modeling, a social scientist, a computer scientist, a self-described “back end with a touch of front end” engineer, and a business executive. We were brainstorming about the potential for accessing social Web applications from personal mobile devices.

Two minutes into our conversation, I said, “We should start with some sound social principles.” This was my bland opening gambit, a preface. Or so I thought. Once I paused for a fraction of a second, everyone started talking at once—like whippets after the faux rabbit at a dog race, the conversation was off. Then it stopped, followed by blank looks. To mix my metaphors horribly: The conversation plumed, spiraled, and evaporated like the contrails of the Blue Angels on July 4th.

The problem was the word “social.”

A quick perusal of the dictionary yielded these definitions of social: relating to human society and its members, living together or enjoying life in communities or organized groups, tending to move or live together in groups or colonies of the same kind, and living or liking to live with others, disposed to friendly intercourse. Etymologically, the word derives from the Latin socialis, meaning “united,” “living with others,” and sequi, meaning “follower,” which should make contemporary social Web application designers happy.

The famous 17th-century philosopher John Locke spoke of “social” as meaning “pertaining to society as a natural condition of human life.” And as an adjective, “social” appears as: “social climber” (starting in 1926); “social work” (1890); “social worker” (1904); “social drink(ing)” (1976); “social studies” as an inclusive term for history, geography, economics (1938); and a concept close to our hearts in these hard times, “social security” as a “system of state support for needy citizens” (1908). That is the backdrop to the conversation I thought I was starting. However…

To the interaction designer, “social” invoked “social Web applications” and all that it means for human interaction with voting (thumbs up and down), favoriting (stars), contact lists and buddy lists, followers, avatars and profiles, chat threading, commenting, recommendations, and view counts. It meant a discussion of icons that suggested (or were derivative of) those on successful social media sites and multimedia content upload and sharing. Talk poured forth about social games and questionnaires, pokes and winks and friending. Let me be clear about my position: I love thinking about these issues, and have recently reviewed drafts for two excellent books in this area—Building Social Web Applications by Gavin Bell and Designing Social Interfaces by Christian Crumlish and Erin Malone. But for the purposes of this meeting, tossing out all of these concepts was great, but it was also putting the cart before the horse. We’d get there, but not yet.

To the computer scientist, “social” sparked sweet, seductive imaginings of the social graph. Wikipedia defines a social graph by explaining that “a social network is a social structure made of individuals (or organizations) called ‘nodes,’ which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige.” The entry continues: “Social network analysis views social relationships in terms of network theory about nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. The resulting graph-based structures are often very complex.” No kidding. I love the complexity and curiosity of my species—human beings—but these ways of conceiving social relationships often strike me as dismayingly reductive. They’re very useful within their bounds, but they are summary abstractions of the lyrical complexity of everyday social life. We had a very fruitful foray at this meeting into social recommendations and boundaries—the complexity of “friend relations” and “access control privileges”; the connections between objects via hash tables; and connections between people, their stuff, and other people’s stuff. We discussed these things as inadequate approximations for supporting the negotiated and fluid natures of social trust relationships and the subtle boundaries we negotiate with others.
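The node-and-tie view the Wikipedia entry describes can be sketched in a few lines of Python. This is purely an illustrative toy, not anything we discussed building; every name in it is hypothetical:

```python
from collections import defaultdict

# A toy social graph: nodes are people, ties are typed edges.
# The tie types ("friendship", "kinship", ...) echo the kinds of
# interdependency the Wikipedia entry lists.
class SocialGraph:
    def __init__(self):
        self.ties = defaultdict(dict)  # node -> {neighbor: tie_type}

    def add_tie(self, a, b, tie_type):
        # Ties in this sketch are symmetric.
        self.ties[a][b] = tie_type
        self.ties[b][a] = tie_type

    def neighbors(self, node, tie_type=None):
        # All neighbors, optionally filtered by tie type.
        return [n for n, t in self.ties[node].items()
                if tie_type is None or t == tie_type]

g = SocialGraph()
g.add_tie("ana", "ben", "friendship")
g.add_tie("ana", "cam", "kinship")
print(g.neighbors("ana"))             # ['ben', 'cam']
print(g.neighbors("ana", "kinship"))  # ['cam']
```

Even this tiny sketch shows the reduction the column worries about: an entire friendship collapses to a single string label on an edge.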

Somewhat relatedly, my colleague with statistical training was excited to bring aggregate behavioral models from activity data, and collective intelligence from explicit data, into our discussion about contemporary notions of “harnessing the hive.” We pressed through issues in database design and the potential for data mining, as well as relevance and recommendation algorithms for automatically inferring buzz, interest, and so on. Here, “social” was to be found in the shadows cast by humans clicking, clacking, typing, uploading across the interconnected networks of the Internet, and making connections betwixt and between things that were heretofore not there to be connected, or at least not visibly so. We discussed how we sometimes derive models—hypotheses, really—about behavior from these obscured traces, and how we are sometimes fooled into seeing patterns where in fact there are none.
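The kind of “buzz” inference from activity traces mentioned above can be sketched naively. The event log, item names, and threshold here are all made up for illustration:

```python
from collections import Counter

# A naive "buzz" model: count clicks per item in an activity trace
# and call anything clicked at least twice "trending".
events = [
    ("alice", "video-42"), ("bob", "video-42"), ("cara", "photo-7"),
    ("dan", "video-42"), ("erin", "photo-7"), ("fay", "song-3"),
]

clicks = Counter(item for _, item in events)
trending = [item for item, n in clicks.most_common() if n >= 2]
print(trending)  # ['video-42', 'photo-7']
```

With six clicks, the “pattern” is already a hypothesis resting on a tiny, possibly skewed sample, which is exactly the trap the paragraph describes.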

I won’t enumerate all views expressed, but surely you get the point. I also don’t want to pretend I was seeing the whole picture as this conversation was unfolding. I was as seduced by all these viewpoints as my colleagues were entranced by possibilities for implementation and creation. But later—and it was some time later—when I pondered how this short conversation had played out, I realized that we had all collectively engaged in a “we could build/implement/create/design/make” discussion. I had intended to have a conversation at a higher level—one that addressed what people really need, or what would be really helpful and valuable to people. Stepping back even further in the ideation process, I would have liked a conversation about outlining which spaces to probe to see what people need.

Instead, we were enacting the classic “ready, fire, aim!” approach to design that has been a parody of innovation for many years now—design it because you can, stitch together what you already know how to do, throw it out and see what sticks. This is perhaps a natural human tendency because really creative thinking is hard. In a 1986 paper entitled “No Silver Bullet,” Fred Brooks distinguishes two parts of software design and development, which he calls “essential” and “accidental.” “Essential” refers to the difficult part of understanding the domain for which we’re building software and determining what software to build in that domain—what activities to inspire, improve, or transform. “Accidental” is the programming and process that follow to implement the solution that has been devised. Brooks aptly points out that we have become quite good at training people and designing tools for the accidental part of software development, but the essential, ideation part continues to loom. He succinctly states, “The hardest single part of software development [remains] deciding precisely what to build.”

So what I meant to inspire when I dropped my word-bomb was a discussion of the role that such a device or application could play in everyday life. I was talking about how whatever we propose fits into people’s social context, into how they manage their everyday doings. I was talking about people, about relationships and friendships, and about the contexts that we inhabit and co-create as we move through daily life. I was hoping to focus on the social settings that would afford, support, and allow a technology to be used. I was talking about delivering value in situ without disrupting the situ—or at least giving some thought to designing ethically; to considering the positive versus negative knock-on effects—any disruptions to the social status quo we wanted to inspire and those we did not. I was talking about social norms in the places people frequent: whether or not, for example, it is socially acceptable to whip out said devices and applications in public. I was not meaning to talk about ‘like’ buttons and ‘share’ buttons and diffusion models. I was talking about the whys and wherefores, not the hows, whats, and whens.

I was silently channeling ergonomists and social scientists who address the physical and/or social constraints of the situation, which can in fact mean that an attention-grabbing application would be dropped in favor of physical and social comfort. I was talking about the potential contexts of use for any application we would build.

I am not saying we should have indulged in an infinite regress into the consequences of every design decision, but it is worth engaging in some chain-reaction thinking. What I was inviting us to do was understand what the problem was, and where there would be something useful for people, before we started coming up with solutions. It seemed to me that we needed to understand what we were designing for before we started designing.

I just started the conversation poorly.

I am trying to make a bigger point here, beyond the natural human tendency to trot out well-worn, already-known processes and avoid thinking about hard things. I am also highlighting different ways of seeing the world. I am not a sociolinguist, but in my gathering there was a clash of understandings. Even in a group that believes it is aiming for the same goal, words mean different things to different people and provoke different semantic associations. This has consequences for collaborative, multidisciplinary design teams. People from different disciplines come from different “epistemic cultures,” a concept I am borrowing (and hopefully not mangling too much) from Karin Knorr Cetina’s book on the ways in which knowledge is created in scientific endeavours. Epistemic cultures are shaped by affinity and necessity; by the ways in which things that arise are constructed as issues, problems, or growth opportunities; and also by historical coincidence. An epistemic culture determines how we know what we know. Think of places you have worked: how different organizational cultures think about and institute incentives, employee evaluation, teamwork, work-life balance, and so on. In the same way, “social” may be measured, used, and applied differently in different epistemic communities or cultures. We simply need to be alert to that fact.

What this means practically is that we must be aware of the need for translation and persuasion for human-centered design in contexts where multiple parties are present—brokering between different epistemic communities and different constituencies in the design space. It also means that if one is to carry a vision from ideation to implementation and release, one had better be present for all the “decision” gates and be careful about how ideas are understood as the application or artifact goes from inspiration to innovation. Because all along the way, the initial vision will slippy-slide away into the details, and those details may consume the whole. Rube Goldberg and W. Heath Robinson come to mind: creators of fantastic contraptions that maybe, perhaps, get the job done, but in the most circuitous manner possible, honoring the engineering over and above utility and aesthetics.

Before I take my leave, let me ask you, as an illustration of all the different perspectives on the simplest of technologies, how many designers does it take to change a lightbulb?

Well, normally one…but:

+1 because we need to user-test the lightbulb
+1 because marketing wants to build the box and brand the bulb
+1 because sales wants to go global with the lightbulb
+1 because engineering wants detailed specs
+1 since we need to evangelize about it
+1 since we need to build a user community around the lightbulb
+1 since we need to explore different directions we can take the lightbulb