The New York Times’ Lede blog has an interesting roundup of events in Britain, apparently sparked by a speech in the House of Lords this week by a baroness/neuroscientist. Lady Susan Greenfield feels that spending too much time on Facebook, Bebo and Twitter is “infantilising” children’s brains. She goes on to suggest links between ADD and video gaming, among other dire consequences of being plugged in.
Here’s a taste of the Guardian article linked above:
She told the House of Lords that children’s experiences on social networking sites “are devoid of cohesive narrative and long-term significance. As a consequence, the mid-21st century mind might almost be infantilised, characterised by short attention spans, sensationalism, inability to empathise and a shaky sense of identity”.
Arguing that social network sites are putting attention span in jeopardy, she said: “If the young brain is exposed from the outset to a world of fast action and reaction, of instant new screen images flashing up with the press of a key, such rapid interchange might accustom the brain to operate over such timescales. Perhaps when in the real world such responses are not immediately forthcoming, we will see such behaviours and call them attention-deficit disorder.
That’s nothing that we haven’t already heard in the U.S., but her comments are getting some attention in Britain at least partly because Lady Greenfield is a neuroscientist. But here’s the problem with her argument:
Part of what young people are doing on social-networking sites, blogs, Twitter, etc., is creating a sense of identity. The thing that freaks old people out about the Internet is that anyone can be anything online. Anonymity, while it can be scary, is also empowering — consider the going-to-college experience, satirized brilliantly by the Onion, that many of us engage in. Sure, kids manage their Facebook profiles and tweets to fit in with their group, but I don’t see how that’s different, developmentally, from managing your real-life peer group at school, or the outfits you wear, or the music you aggressively blast out of your car, etc.
Now, let’s be clear: There is some real research that says that being exposed to rapidly changing images changes our expectations of what we’ll see in the future. MTV is a good example — back when it used to play videos, the network had an effect on our perception of videos such that directors started using faster and faster cuts. Whether that represents a developmental change in the brain or a change in expectations, I have no idea. But I’d guess it’s an expectations change.
Moving on to another segment of the Guardian story:
She also warned against “a much more marked preference for the here-and-now, where the immediacy of an experience trumps any regard for the consequences. After all, whenever you play a computer game, you can always just play it again; everything you do is reversible. The emphasis is on the thrill of the moment, the buzz of rescuing the princess in the game. No care is given for the princess herself, for the content or for any long-term significance, because there is none. This type of activity, a disregard for consequence, can be compared with the thrill of compulsive gambling or compulsive eating.
“The sheer compulsion of reliable and almost immediate reward is being linked to similar chemical systems in the brain that may also play a part in drug addiction. So we should not underestimate the ‘pleasure’ of interacting with a screen when we puzzle over why it seems so appealing to young people.”
Greenfield also warned there was a risk of loss of empathy as children read novels less. “Unlike the game to rescue the princess, where the goal is to feel rewarded, the aim of reading a book is, after all, to find out more about the princess herself.”
Now, I’ll be the first to admit that playing computer games gives me a sense of pleasure almost like a high. Otherwise, why would we play them? (Also, I’ve always understood that drugs were addictive precisely because they engage our pleasure centers. And is that high any different, neurochemically, from the one we get from reading a book or hitting a ball or climbing a mountain? I’m not sure it is.)
But I’d vigorously challenge her contention that playing games doesn’t engage us in a narrative. Although plenty of games (Pac-Man, solitaire, Tetris, flash games, etc.) are pretty much devoid of content by their nature, plenty of others have a strong sense of narrative that is integral to the gameplay, including the Mario series where you rescue the princess. Sure, sometimes the narrative is ham-handed and propped up mainly with “interesting” costumes.
But anyone who’s played the Zelda series, or Resident Evil, or KOTOR, or GTA, or even (God forbid) Halo knows that the narrative is part of the game. Metal Gear Solid has always felt like a movie, to the extent that MGS4 was rumored to have 90-minute cutscenes. (The fact that this was plausible tells you something about the strength of narrative in games.)
And let’s not dwell too long on her contention that the point of reading a book is to find out more about the princess herself. Sure, there are some books for which that’s true. There are other books that are more, uh, escapist.
The point is, let’s not decide that the media kids consume has some final defining effect on their futures. If that were true, all we’d need to do is ship underachieving kids en masse to the ballet. Poverty solved!
And not to pile on further, but I’ll let you take a look at Lady Greenfield and decide for yourself.
(As an aside, I’ll say that I always hated newspapers/magazines that make a point of having headlines asking SCARY RHETORICAL QUESTIONS??? when the answer is no. It’s just cheap.)
(As a second aside, that’s a pretty good photo illustration from the Guardian. Beats the heck out of the stereotypical online illos we see from a lot of U.S. newspapers.)