Friday, September 13, 2013

Free Speech, Pervasive Harassment and a Stopgap Effort

The Internet can be a very unkind place; more specifically, it can be home to some very unkind people. If you are a woman speaking up about feminism, for instance, harassment and taunting can quickly become a terrible norm. There are entire communities (I will not link to them, even with nofollow) that seem to be dedicated to using harassment, taunting, stalking and spamming to silence people on the Internet who speak up for themselves and those they care about. As bad as all that is, it's even worse if you openly identify as trans* or an ally.

The technology that delivers such harassment, however, can also be used to recover some peace. Any worthwhile social media network in existence today allows users to block anyone they wish without giving cause. Importantly, no one needs to justify to others whom they wish to separate themselves from. We get to set our own boundaries and shape our own social environments. That said, the sheer volume of harassment directed at some users can be daunting enough that manually blocking individual harassers becomes onerous and triggering. Thus, tools like the Atheism+ Block Bot for Twitter help curate lists of repeat harassers, so that users looking for a quieter and more welcoming social environment have the means to build one.

Of course, the Block Bot is rather unpopular in certain crowds, to say the least. It is derided as everything from an expression of groupthink to a violation of Twitter's terms of service. One truly disgusting aspect of many objections is how the concept of free speech is invoked to oppose any attempt to filter out or curtail the flow of such harassment online. There is practically a pocket industry devoted to misunderstanding what freedom of speech means when it comes to harassment, discrimination and bigotry (most recently, I would categorically object to Mathew Ingram's characterization of the response to Pax Dickinson's hateful Twitter habits), so let me be very clear on this point: speech has consequences. Freedom of speech does not mean that anyone else is obligated to listen to what you say, to condone it, to broadcast it on your behalf or (as is relevant in Dickinson's case) to continue employing you when your speech compromises your ability to do your bloody job. In particular, other people have the right to talk about you when you use your speech to harass, intimidate, annoy and otherwise bully another person. If they decide to warn others that you may not be worth the time and emotional effort of interacting with, then that is a consequence of your speech that must be accepted in order for freedom of speech to have any useful meaning whatsoever.

All that aside, there has been (and I stress that this is my opinion, informed in part by my own privileges and biases) a single objection to the Block Bot that rings true: the block lists it has offered so far are not transparent. It is not easy, for instance, to determine which admins added which users to the block lists, or why. Rather than use this objection as an excuse to take away people's ability to quickly filter their social environments, however, I wish to address it at a structural level. In particular, I would like to build a blocking tool that does not in any way build in my own values or views, but that can be used transparently by me, by my friends, by my detractors or by anyone else who wishes to.

That is, I want the process of building, disseminating and applying blocklists to be decentralized. I do not want to be the arbiter of anyone's social media environments, nor do I want anyone else to be. Rather, I want people to have access to the tools they need to usefully employ social media networks without the constant threat of harassment and bullying.

Thankfully, the web is already decentralized in precisely this way. A blocking tool that simply fetches lists from various websites is thus abstracted away from the particular motivations of those running such websites. Moreover, transparency can be achieved with standard tools from the open source movement, such as Git, which tracks the history of contributions to a given file in a robust and decentralized manner. Under this approach, each user would pick blocklists curated by people they trust, with each list hosted in a Git repository that records its metadata and history in a rich and reusable way.
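
To make that concrete: the exact format of a blocklist is up to whoever maintains it, but purely as an illustration, a killfile could be as simple as a plain-text file with one screen name per line, kept in a Git repository. (The file name and entries below are placeholders, not a real list.)

```
# harassers.txt -- hypothetical killfile: one Twitter screen name per line,
# with '#' marking comments. Because the file lives in a Git repository,
# the record of who added which name, and when, can be audited with
# ordinary tools such as "git log -p harassers.txt" or "git blame harassers.txt".
example_account_1
example_account_2
another_example_account
```

Anyone can fork such a repository, inspect its history, or maintain a competing list of their own, which is exactly the kind of transparency the objection above calls for.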

With all due respect, then, to the authors and maintainers of The Block Bot, I have taken my own stab at the problem based on these principles. I have developed a userscript (that is, a small JavaScript program that runs inside your browser and changes pages as you view them) that modifies Twitter to block users listed in one or more killfiles of your choice. This userscript is itself open source (under the AGPLv3, one of the strongest copyleft licenses available), such that it can be modified, reused and repurposed without my approval or even my awareness.
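
The actual script in the repository is the authoritative version; what follows is only a rough sketch, written for Greasemonkey, of how a userscript of this kind can fetch a killfile and act on it. The killfile URL and the DOM selector are placeholder assumptions, and for simplicity the sketch merely hides tweets from listed accounts rather than issuing real blocks.

```javascript
// ==UserScript==
// @name        killfile-sketch
// @namespace   https://example.com/
// @include     https://twitter.com/*
// @grant       GM_xmlhttpRequest
// ==/UserScript==

// Illustrative sketch only: hide tweets from accounts listed in a killfile.
(function () {
    "use strict";

    // Placeholder URL; point this at the raw view of a Git-hosted killfile.
    var KILLFILE_URL = "https://example.com/killfiles/raw/master/harassers.txt";

    // One screen name per line; '#' starts a comment (see the example above).
    function parseKillfile(text) {
        return text.split("\n")
            .map(function (line) { return line.trim(); })
            .filter(function (line) { return line && line.charAt(0) !== "#"; });
    }

    function hideTweetsFrom(names) {
        var blocked = {};
        names.forEach(function (name) { blocked[name.toLowerCase()] = true; });
        // Assumed selector: the Twitter web interface of the time annotated each
        // tweet with its author's screen name in a data attribute.
        var tweets = document.querySelectorAll("div.tweet[data-screen-name]");
        Array.prototype.forEach.call(tweets, function (tweet) {
            var author = (tweet.getAttribute("data-screen-name") || "").toLowerCase();
            if (blocked[author]) {
                tweet.style.display = "none";
            }
        });
    }

    // GM_xmlhttpRequest is Greasemonkey's cross-origin request API; it is needed
    // here because the killfile is hosted on a different domain than twitter.com.
    GM_xmlhttpRequest({
        method: "GET",
        url: KILLFILE_URL,
        onload: function (response) {
            hideTweetsFrom(parseKillfile(response.responseText));
        }
    });
})();
```

A fuller version would also need to re-apply the filter as Twitter loads new tweets dynamically (for example, on a timer or with a MutationObserver), but the flow above is the heart of the approach.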

Instructions on how to use this userscript are available at the GitHub repository where it is hosted. If you are interested, please use it and let me know what you think and how it can be improved. With your help, I hope we can take the next step towards making social media a tool we can all use safely and productively. This isn't the end of the story, and we must continue working toward a better Internet, but I think the userscript is, at the least, a useful stopgap that can help mitigate the worst of what many users currently have to go through in order to participate.
