In this week’s CaSMa blog entry I’d like to share with you my experience of a recent student workshop session that focused on perspectives and theories of privacy, designed, delivered and kindly approved for discussion in this post by Dr Dario Landa-Silva, Associate Professor in the University of Nottingham’s School of Computer Science.
First though, allow me to provide a little context. Since the start of October this year, I’ve been part of a small team of PhD candidates assisting Dr Landa-Silva with the workshop segment of the ‘Computers in the World’ module, or, as I have often found myself describing it to curious colleagues pondering the Chris-shaped absence at Horizon HQ on a Thursday morning: ‘Computers in the Wild’. And yes, I know; this seemingly incurable verbal slip is extremely convenient for the purpose of this blog post, but I assure you, Dear Reader, that the truth has been only moderately embellished and, as it happens, the wilderness is perhaps not too far removed as a metaphor for the topic at the heart of the module in any case.
Indeed, for a researcher of human behaviour in the digital economy, the ‘Computers in the World’ course feels exciting and impressively ambitious in its scope: encouraging final-year undergraduates and taught postgraduates to engage with wider issues relating to computer-based systems, ranging from legal liability, data protection and the cultural impact of computing, to ethics, professionalism and even the portrayal of computing in popular media and fiction. And believe me when I tell you that my Science Fiction fanboy senses are already tingling in anticipation of this latter session. In any case, I believe I speak for the CaSMa team when I say that these topics are not simply interesting but represent crucial issues for researchers to understand and consider, whether based in Computer Science or any other school within the academy.
So, what was especially interesting about last week’s workshop session from a CaSMa perspective was that it represented a fantastic opportunity to try to gauge the extent to which a bright, culturally diverse group of students might be thinking about the concepts of privacy and surveillance in the digital society. In particular, the workshop attendees were presented with the following scenario (albeit substantially abbreviated for the purpose of this blog post):
Professor Hernandez teaches persuasive writing at her University. On her module, she requires students to write an essay for assessment defending something they actually oppose, as a way of helping them to distinguish between argumentation that merely sounds ‘good’ and argumentation that is logically sound. One student, Michael, takes this requirement on board and produces an excellent paper titled “In Defence of Terrorism…”, arguing that all modern democracies were essentially founded by terrorists and, in particular, comparing George Washington to Osama Bin Laden.
Prof Hernandez receives all submissions, including Michael’s, electronically and uploads them, along with her positive comments, to a web-based plagiarism-detection system, where the identifying information provided is Michael’s University signifier (e.g. lpxmm), the module title and his year of study. Meanwhile, Professor Bobson, teaching at a separate University, comes across the paper in the system because it references a number of sources similar to those in a submission from one of his own students. He is so outraged that he posts the full text of the paper, along with all the identifying information, to his blog, which subsequently gains the attention of the wider media and triggers a fervent social media campaign to uncover the true identities of the author, the Professor and the institution.
At this point, I would encourage you to imagine that you were an attendee of this workshop. Having read the scenario through yourself, who do you believe was to blame for the consequences described? Professor Hernandez? The student, Michael? Professor Bobson? The press? The amateur sleuths on social media? Perhaps all of the above, or maybe even no-one at all?
As I carefully slalomed between the tables of the packed lecture hall, it became readily apparent that for ‘my’ groups, views differed quite significantly. By way of illustration, after applying both the ‘nothing to hide’ and ‘right to be let alone’ perspectives on privacy, one group were fervent in their allegiance to the latter, and in their belief that Professor Bobson, the press and the baying public were all chiefly to blame for succumbing to the lure of sensationalism and defiantly ignoring the context in which the essay was composed.
A different group of students felt that Professor Hernandez might have exercised greater diligence in ensuring that the system could safeguard her student’s privacy. For almost all, Michael appeared the ‘wronged’, innocent party. Perhaps most interestingly, though, little blame was attributed to the system itself, despite many of the groups identifying how the aggregation of seemingly innocuous personal information seemed sufficient to breach the student’s privacy.
Though it is impossible to accurately reflect the breadth and verve of the debates that took place in the workshop, I have to admit that I went away thoroughly impressed with how the students on the module had applied their understanding of complex issues relating to privacy and surveillance to evaluate the scenario provided. A common thread among all of the groups that I personally encountered (and I should stress that these represented only around 25 percent of the students who attended the session) was the difficulty of accurately interpreting context online, of striking a balance between individual rights to privacy and societal protection and security through surveillance, and of confronting the flaws in the ‘nothing to hide’ interpretation of privacy.
As we will no doubt elaborate upon in future blog posts, underpinning our work in CaSMa is a core belief in placing people at the centre of research into human data on social media, and in finding ways to re-establish personal control in research that draws upon the digital data of citizens. Perhaps what I took away most from the students I encountered in last week’s workshop was a sense of reassurance and confidence that these future researchers have the knowledge and understanding to tame what often feels like social media research in the wild. Hey, welcome to the jungle.