
I just finished a marvelous Skype visit with Howard Rheingold’s class of co-learners at Stanford. They asked such probing and interesting questions and I had fun answering and learning about the different ways that class is being co-taught, co-learned.  One theme we all emphasized is that, before we can make comparisons between virtual worlds and real worlds, we need to understand more about the real world. That is, so much cognitive science research is based on highly artificial representations of the “brain” and of “society” and of “the human,” of “attention,” and “interaction,” and the virtual. And, of course, of the nostalgia-ridden before and after, then and now.

Here are some highlights of our conversation:

(1) Don’t we lose a lot, especially touch, when we interact in virtual worlds? One person asked whether I thought virtual worlds would ever have the same tangible qualities as face-to-face interactions, a question inspired by Sherry Turkle’s new book. I have much admiration for Turkle’s work, but I’m not sure I buy all the conclusions of this careful, serious study of loneliness in our digital world. Have we ever not been lonely? When the Stanford students asked me that question, I turned it inside out by saying that, in fact, we don’t really know very much about the tangible qualities of face-to-face interaction, so it is hard to compare it to the even less that we know about the tangible qualities of the virtual. We have a lot of research on sight, considerably less on hearing, drastically less on taste (most of it around gastronomy and food and wine culture), still less on smell (most of it around fragrance), and a really impoverished amount of research on touch. So before we can make generalizations about the real versus the virtual, we need to ask ourselves why we are so afraid of touch in the real world. We know from comparative studies of babies and nurturance, for example, that American parents talk to their newborns more than any other parents studied, and touch them less. Isn’t it interesting to think about the cultural constraints and imperatives of touch in the real world before we generalize about the impoverished, lonely life of the virtual? Maybe, given our Puritanical past and our aversion to touch in infant nurturing, we actually get more emotional strength from the virtual in our culture. I don’t know if that is true or not. But the fact is, and it is a fact, no one does, because we do not have a sufficient baseline of research in real worlds to generalize about whether virtual worlds are better or not.


Here’s another example: when we think about those very tender, touching studies of how much the elderly and infirm in nursing homes thrive when they are online, we also have to think about our ideas of touching those who are sick, ailing, injured (especially those who suffer extreme facial disfigurement), very old, or dying. We don’t do a very good job of it as a society. Interviewing nurse practitioners, I learned how sad it sometimes makes patients when their loved ones visit and no longer touch them, as if fearing contamination or contagion. It registers on a deep, emotional level as rejection. Maybe virtual interactions are less alienating than life in the nursing home, deprived of emotion and touch by those who are not necessarily good at expressing, in gesture or in words, their own fear of the unknown, the beyond, of illness and death. In the Netherlands, in Australia, and in the US too, there have been experiments teaming shut-ins with drop-outs via chat rooms and virtual spaces, with huge benefits to both.


(2) Why do I argue that, in certain circumstances, distraction can be beneficial? One person in Howard’s class brought up the issue of distraction and disruption. I make the point in Now You See It that, unless we are disrupted, we often see along very set and static paths. In the famous Neisser (and later Simons/Chabris) attention-blindness experiment, you tell people to count the basketball passes among players wearing white, and they are so busy counting that over half of them miss a nine-second appearance by a woman in a gorilla suit. The lesson from this experiment isn’t solely that we do one task well but not two tasks well. If that were the whole of attention blindness, it would be easy to circumvent. (And there would be far fewer traffic accidents within five miles of home.) The fact is that the more we focus on a specific, single, and important task (a directed task that focuses our attention), the more we have to blot out everything else. So if you tell people, “Hey! Here’s a video. Watch it and tell me about it!” everyone, of course, sees the gorilla saunter in, and they can tell you that there were people passing basketballs, but they probably can’t tell you how many times the ball was tossed, might not be able to tell you how many people were tossing it, and definitely won’t know how many passes went only between people wearing white. They weren’t told to notice those elements, and so they didn’t. Similarly, if charged with counting the passes between those in white, over 50% will miss the gorilla, but if you suddenly interrupt their counting, they see it.


Before the experiment became almost a meme (thereby skewing results), I loved to mess with the set-up to see how the results varied in different test situations. A few times, I had a student in on the experiment who would wait until about halfway through the gorilla’s appearance, about four seconds in, and then laugh maniacally. Everyone would turn to him and, turning back, would see the gorilla, even if they hadn’t before. One time, I supplied external motivation to the counting exercise, with an exhortation something like, “Hey, class! The Harvard researchers who did this experiment say no class has ever had a perfect score, with everyone counting the right number of tosses. I know we can do it. Go, Duke!” Result? More perfect counts, fewer gorillas.

Attention, in other words, is not just neurological but encompasses the body, the emotions, the attitude toward the person giving the test (extremely important in judging test results and the reasons behind success and failure), and cultural values too. These changed results, depending on how the test is administered, confirm what every insurance adjuster I interviewed for Now You See It insisted upon: that internal factors (emotional upsets like a pink slip or a Dear John letter, physical ones like a headache or a mysterious stomach ache) play a far more significant role in our dangerous distractions than we track or notice.

And that “we” extends to cognitive scientists, as my friend and colleague Dan Ariely, author of the brilliant Predictably Irrational, has found in the many very clever experiments he has done demonstrating how easily we trick ourselves with our own expectations. We cannot say very much with wisdom or knowledge about the comparative cognitive experience (enhanced or diminished) of virtual interactions until we become far more sophisticated in our understanding of the irrationality, distractibility, and susceptibility to manipulation in the actual world we inhabit, irrationally, predictably, all of the time.


When I talked with Howard’s class, I told the story of going bird watching for the first time with an expert. At the end of a splendid day, I could identify four or five different birds by sight, flight pattern, and even sound. I was so proud! But if you had asked me anything else about the forest that day, I would have been at a loss. And if I had not been learning how to identify birds, I would have noted hundreds of different things about the day because my attention would have been unfocused, luxuriating in a gorgeous day. That’s not a “better” or a “worse.” Sometimes you want to count basketballs, sometimes you want to learn a bit about birding from an expert, and sometimes you had better see that gorilla (in fact, most times you do, in a metaphoric sense).


The interesting part is that my friend, the expert bird watcher, saw far more going on in the woods that day than I did, precisely because identifying birds was easy for him, so he had much more attention to devote to other things. That’s how it goes. It is not just about doing two tasks at once; it is also about the amount, the kind, the novelty, the intensity, and the importance of the multiple tasks.


In both of these cases, the issue is not “black” and “white”: it’s not old versus new, real versus virtual, monotasking versus multitasking, attention versus distraction. In all of these, it is a matter of taking an inventory of the full range of one’s own emotions and experiences, being introspective about the complexity of our mental, emotional, and social states as humans, and then thinking about what works for us, what hampers us, what promotes our well-being and happiness in the world, and what is simply peripheral and insignificant, tiresome or exhausting, distracting or alienating. There are no hard and fast answers, and most of the nostalgia for a time before digital is extremely naive about what life was like then, or now.


This is one reason why, in researching Now You See It, I not only read scientific lab-based experiments but also interviewed insurance adjusters, magicians, advertising directors, voice-over actors in prescription drug TV commercials, and others with first-hand knowledge of (or skill in) the fine art of distraction, not in lab settings but in real worlds. Our metrics are often rooted in a very simplistic paradigm of the human. Our experiments are far too often based on models that work in the lab but have very little real-world application.


The fact is, having read in cognitive neuroscience for over a decade, I am astonished at how good and complex some of it is, and how mindless, inhuman, silly, stereotypical, shallow, and, basically, unknowledgeable (especially about cultural variation and complexity) some brain research is. My pal @ibogost teases me about being too nice, and he’s probably right about that. I have an admittedly underdeveloped talent for snark, so I won’t name names here of some of the really reductive work happening now on cognition and the digital, but there is far too much silliness out there that passes as “science.” I love great research design and loathe the simplistic studies that have no insight into the variables the experiment itself embeds in the findings. Sometimes, you’d think there was no human being attached to the brains some of these folks study!


Please, dear scientists, stop making generalizations about human distraction based on lights flashing in sequence across a screen, or, at least, please don’t extrapolate from that static situation to how real humans respond (in either real or virtual worlds). Not every prefrontal cortex bursting into illumination on a brain scan gives us solid information that helps us decide how and what to teach a child, or how to reform the institutions that frustrate us. Having been involved with neurological rehabilitation on an intensely personal level, having seen brilliant therapists work with mind and body to rehab both, I am humbled by how much we know, and how little, about the brain’s abilities and its disabilities, and by how baffled we ultimately are by the complex thing we like to call “the brain.”

So much of the “good old days” versus “bad new days” blather operates from the most static and rigid ideas of human thought, emotion, attention, intelligence, and culture. How dare we reduce ourselves?


Our ideas of the human are far too often limited to our own perception within our own culture, with all the fallacious retrospection that memory always lends us. We can do better in the world, be more productive beings, not by fearing technologies (old or new) but by understanding what works for us, as individuals and as a society, in a given situation, and then working to maximize those potentials. No matter what the pundits say!


Thank you, dear Howard, for allowing me to be a co-learner in your class today. Thank you, Stanford co-learners, for sharing your ideas and thoughts and words today. Thank you for always remembering the complexity, when gongs ring and when eyes are closed, in worlds real and otherwise. It was a total pleasure to meet you all, even if a virtual, mediated, Skyped meeting at that.

Cathy N. Davidson