July 19, 2016
Due to some drama that was birthed on Twitter and quickly grew legs, I deleted my Twitter account in early June 2016. I had a few reasons for this: it was making me more unhappy than happy, partly because almost every new tweet contained references to topics that upset me. I also viewed it as an experiment in addiction, to see if I would go through withdrawals of some sort. Sure enough, after blocking Twitter's domains in my hosts file, I found myself reaching for it multiple times a day, whenever onscreen activity reached a lull and I wanted to fill the gap in my attention. In the same way that a trip to another culture might teach you about your own, when I cautiously resumed Twitter about a week ago, I found it nearly unusable.
Before exploring internet chats more extensively, I think it’s worth taking a step back and asking some fundamental questions about human nature. I’m reminded of a speech Eleanor Saitta gave at OHM in 2013, looking at how worldviews may differ if you view people as inherently good versus inherently bad. If we are inherently good, then perhaps there is some trauma that happens to us (coming of age, perhaps?) which chips away at the optimism over the years; if we are inherently bad, then maybe life experience teaches us over time to empathize with others. Is that the lesson William Golding was trying to teach us in Lord of the Flies? But it seems that lesson is countered, at least to a degree, by the results of the Harlow Monkey experiment.
Clearly we are social creatures to a degree, otherwise solitary confinement would not be considered torture. However, confinement can take many forms, and I'd argue that one of them is the veneer of "free speech" when the social norms are quite the contrary. There is an ongoing debate in the US right now about just how free "free speech" can really be, which seems to be complemented by a very angry movement to throw out every tradition in the book. Perhaps the best question to raise, the one that caps all of this, is: "At what point does a word become more than just a word?" Then we begin to add more questions: who decides where this point is? What gives them the authority to decide it? What if I disagree with them and want to set my own point? And what if I fundamentally disagree with such a loaded phrase, calling into question the entire premise: that words will always remain just that, words?
That last point would be quickly defended by the late Frank Zappa, who stressed it on CNN's Crossfire when he was attacked during the PMRC discussions of the mid-1980s. It's also defended by the US Supreme Court in Watts v. United States (1969), where they ruled that "crude political hyperbole… did not constitute a knowing and willful threat." That said, it is very interesting to see which words are explicitly included in the "Seven words you can't say on TV," and which words are left out. Does this mean the omitted words are so vile that we should just "know," without being told, that they are unacceptable?1
I think it's now clear that words are just that, words. However, I readily concede that words in context, especially when that context is directed at someone, can summon rather potent emotions from their target. This is, after all, why we often use titles and honorifics: "Sir," "Madam," "Your Holiness," and so on. These words convey an intent of respect. They are the verbal equivalent of wearing our Sunday best when meeting the President. It logically follows that words can also convey other intents, sometimes malicious ones. An online search for the phrase "hate speech" demonstrates this fairly quickly.
And here is where the challenge comes in, especially in Western cultures where we try to respect the concepts of individual sovereignty, freedom to choose, and so on. In addition to the explicit right to "free speech," we often also expect a right to "free listening": that is, especially in our own homes, we want to be able to choose what words we read and hear, and what words we block out. In our home, this is a personal choice we do not need to defend or justify to anyone. However, the more public the conversation becomes, the more others may demand that participation in the public context requires waiving these choices.
But perhaps "public" is the wrong word to use here, because although it shares roots with words like "republic" and "publish," I would argue that there is no such thing as a general, shared public. After all, what does it mean to make something public? Perhaps you're suggesting it is "available" to anyone, but a FOIA document is also technically available to anyone willing to wait the required months or years. Maybe you mean something related to accountability, but it's unclear what that means without specifying to whom one is accountable. Or perhaps we just use "public" as a catchall word for a context that we believe many people have access to participate in.
Imagine for a moment that you are a citizen in a small town in America. You were born there, grew up there, and know everyone well enough that when you wave at them, they wave back. In addition to the written rules, there are a lot of unwritten rules that, as a community, you all mutually understand. One might assert that the collection of individuals, the greater-than-sum community, and all these unwritten rules comprise a “public.” Then we can extend this and say that something which acts in the interest of the individuals, the community, and the unwritten rules, comprises a “public good.”
Now, it turns out that there is a small town in another state, which resembles yours in many ways, but there are some key differences. Maybe they say “coke” instead of “pop,” like dogs instead of cats, or maybe they drive on the left side of the road instead of the right. The differences might seem silly, but they go against your native grain, and conjure very uncomfortable feelings. They force you to ask questions about things you want to consider stable. As a community, you might even say they go against the “public good.” Anyone who has studied Psychology 101 can see this quickly (and unintentionally) invoking the fight or flight reflex, and leading to less than desirable outcomes.
In a sort of Scarlet Letter sense, we often take our emotions and apply them to the person who says "coke" or the person who likes dogs. Instead of being an individual with a trait, hobby, or taste we disagree with, they become "the dog man" or "the coke drinker." Often this is harmless, and the man you know who works at the bookstore becomes "the bookstore guy." But when we take this approach with a negative emotion, it sometimes invokes the fight or flight reflex. Do they become an outcast like Sarah Woodruff, or do we decide they are a sort of Grendel who bears the mark of Cain?
Now rewinding back to our origin, let’s revisit how we handle things on Twitter, on Facebook, and other websites we call “social media.” Rather than seeing the individual or the community, we see a post, often removed from a greater context. And in particular, we see a symbol– a phrase, a word, a picture, an implication. More to the point, because we are creatures based on pattern recognition, when we have such a symbol that is missing context, we strive to fill that void and create a new context for it. And because that symbol may invoke some emotions within us, we use those emotions to create the context. This, I would argue, is how, like cattle, we brand the usernames attached to those symbols.
At least in Western culture, our society is built on a narrative of good versus evil. We see traces of this fight in works like The Iliad, and in many places throughout history. Further, one might argue that from a blank slate, you become good by defeating evil; alternatively, if you are inherently good, you remain good by defeating evil. And if we're taught these lessons from a young age, in our small (or large) towns, it logically follows that they might provide a motivation to action when we see things on Twitter and the like that we identify as evil. But there is one key problem with this approach.
Notice that while the text of The Iliad begins with the rage of Achilles, the story itself begins long before the text does. In fact, one could suggest that the Trojan War is a response to a previous battle, and that the good and bad sides are determined by whoever writes the story. This lesson has been lost in the modern age, where we are flooded with stories of happy endings in which the demon has been defeated, and we forget that after Beowulf defeated Grendel, he found it was only the first battle.
So what really happens when we and our group of "Twitter friends" join a mob and "take down" someone whose words we view as "racist," or "sexist," or anything else? And what is the end goal? I would argue that we are battling against the symbol, forgetting that there is someone on the other side, and that in a race to defeat the monster (neglecting that it might have friends), we ourselves become monsters. It creates a toxic, polarizing environment in which Might Makes Right, and whoever is the hero of the day gets to define what that Right will be. It necessarily leads to further conflicts, and more importantly, it can ruin lives. And yet, none of this is considered when someone is pounced upon because someone on Twitter misinterpreted a word they used.
In the end, these are simply observations. I don't mean to justify one thing or denounce another. Further, I don't think any of this is new. I do offer that "social media" has helped accelerate these nasty reflexes in people, as well as retard the lessons people would normally learn when making mistakes and seeking atonement. It does mean, though, that I will be taking a far more cautious approach to anything represented in 140 characters or fewer.
1 It's also worth noting that when I touch on a very "heated" topic, I must increase the number of sources I explicitly cite, as if using them as a shield, or deflecting anticipated anger. It's an appeal to a greater authority if I ever saw one 😉