I try not to just pass things along most days, but I found the post “The rhetoric of atheist bus ads” on Edu*Rhetor‘s blog to be quite intriguing. Is it hate speech?
Check it out. Decide for yourself.
This morning’s New York Times Magazine contains a fascinating look at “Google’s Gatekeepers”. Beginning with the case of Turkey’s insistence on a censored version of YouTube (ThemTube? UsTube? Some-of-YouTube?), law professor Jeffrey Rosen explores the limits of free speech in a web/world dominated by major capitalist corporations as (or more) invested in their own power than in the voices of “the people”:
“Today the Web might seem like a free-speech panacea: it has given anyone with Internet access the potential to reach a global audience. But though technology enthusiasts often celebrate the raucous explosion of Web speech, there is less focus on how the Internet is actually regulated, and by whom. As more and more speech migrates online, to blogs and social-networking sites and the like, the ultimate power to decide who has an opportunity to be heard, and what we may say, lies increasingly with Internet service providers, search engines and other Internet companies…”
In general, the article raises (kindly without pretending to resolve) important questions about the various versions of “free” speech, the limitations of the Internet as “public” sphere, the tensions among open access and accountability, data control and world domination, and (duh duh duh) the Future. Good stuff for a rainy Sunday.
The real meat of the matter is the issue of free speech in the Internet age: what counts as publicly acceptable, or exceptional, to a World Wide audience. Of course, Google and its subsidiaries have a policy of removing only porn, graphic violence, and hate speech — but in the reality of the virtual world, these already subjective determinations become even fuzzier. As Rosen points out, the international market mandates specific restrictions based on individual countries’ laws, and so Google has often had to filter content for specific contexts. For example, Germany and France have laws against Holocaust denial, so search engines cannot display sites devoted to such denial there. To some degree, that seems reasonable and responsible… until you consider that those denials are merely submerged, not subverted, by their silencing. Moreover, as Rosen argues (I like this guy), “one person’s principled political protest is another person’s hate speech”; he illustrates this tension through demands by Joe Lieberman (this guy bugs me) that Google remove videos he judged to be “jihadist,” a concept on which I’m not sure his views are, well, balanced. Ah yes, best to just sweep pesky protesters under the rug.
These examples bring up the old question of whether silencing haters only lets them hate in silence or in private — rather than exposing their hatred to the light of day and to others’ responses that might challenge or even (optimistically) change those attitudes. I just had this discussion with one of my students: While it’s certainly important to “protect the innocent” from hate speech, does that offer true protection or a false sense of security? What are the dangers, for all sides, of denial? And can we ever really hope to negotiate oppositional viewpoints, let alone overcome them, without, well, engaging them in conversation?
(And how can we learn to ask such questions without feeling — or fearing to be dismissed as — idealistic and naive?!)