Friday 23 December 2016

Identifying popular tourism attractions in London by using geo-tagged photos from Flickr

Dr. Yeran Sun is a postdoctoral researcher at the Urban Big Data Centre, University of Glasgow, UK. His research interests include big data and urban studies, social media research and sentiment analysis, transport and social inequality, and transport and public health.

Social media offers crowd-sourced data for social science research. In particular, GPS-enabled devices, such as smartphones, allow social media users to share their real-time locations on social media platforms.

In my presentation, Flickr geo-tagged photos are used to identify popular tourist attractions in London.

‘Geo-tagged’ photos and tweets from Flickr, Instagram and Twitter users reveal users’ footprints and mobility. Compared to Instagram and Twitter, Flickr has a larger proportion of tourist users, and geo-tagged photos from Flickr users have been used as crowd-sourced data in recent tourism research. However, the distribution of geo-tagged photos is not proportional to the distribution of real tourists’ footprints. Visits to popular tourism attractions such as landmarks are therefore likely to be over-represented by Flickr photos, while visits to less popular attractions are likely to be under-represented.

Although geo-tagged photos are biased, they can still reflect the popularity of tourism attractions that have no ticketing records, such as central squares, public statues, public parks, rivers, mountains and bridges. As clusters of photos tend to form around popular attractions where tourists like to take photos, we can identify popular attractions by detecting significant spatial clusters of geo-tagged photos.

In my presentation, significant clusters are detected using a density-based clustering method called DBSCAN. Most of those clusters spatially overlap popular tourist attractions in London. The free-to-use tools QGIS and R are used to map the geo-tagged photos and to carry out the cluster detection respectively; to run the DBSCAN algorithm we need to install the ‘dbscan’ package in R. Via the Flickr API (https://www.flickr.com/services/api/), we can download public Flickr data, including photos, tags and coordinates, by defining geographic boundaries or searching for keywords. There are API kits written in a variety of languages, including C, Delphi, Java, Python, PHP, .NET and Ruby. You might also use shared Flickr data for your research: Yahoo Research shares Flickr data with researchers (https://research.yahoo.com), and shared datasets can be found at https://webscope.sandbox.yahoo.com/catalog.php?datatype=i.
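To make that workflow concrete, here is a minimal sketch of the clustering step in R with the ‘dbscan’ package. The input file and column names (flickr_photos.csv, lon, lat) are hypothetical stand-ins for your own photo export, and the eps and minPts values are illustrative settings to tune against your data, not the values used in the talk.

library(dbscan)

# Hypothetical export of geo-tagged photos, one row per photo.
photos <- read.csv("flickr_photos.csv")
coords <- as.matrix(photos[, c("lon", "lat")])

# eps is the neighbourhood radius (in degrees here); minPts is the
# minimum number of photos needed to form a dense cluster.
result <- dbscan(coords, eps = 0.002, minPts = 30)

# Cluster label 0 marks noise; other labels are candidate attractions.
photos$cluster <- result$cluster
table(photos$cluster)

# Export the labelled points for mapping in QGIS.
write.csv(photos, "photo_clusters.csv", row.names = FALSE)

Each dense cluster that survives the minPts threshold can then be overlaid on a base map in QGIS and checked against known attractions.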

Wednesday 14 December 2016

Facebook as a Tool for Research

Gill Mooney is a doctoral researcher, studying at the University of Leeds. Her research interests are currently focused on social class and social media. She completed her undergraduate degree in sociology as a mature student at the University of Hull, and prior to this was employed as a project co-ordinator for a young people’s sexual health charity in Hull. @gillmooney

My research is concerned with the ways in which we know, understand and produce social class in the digital environment of the social networking site (SNS), Facebook. The research will provide valuable insights into how social networking is changing the ways we may relate to one another both online and offline, as well as the effect it might be having on broader understandings of social class.

Facebook is the topic of the research, the site in which parts of it take place, and a tool for facilitating its logistics and practicalities. I am using it for recruitment and communication with research participants, and using content collected from Facebook as stimulus for discussion in focus groups and interviews. This combination of online and offline methods and approaches requires reflexivity to run smoothly, but maintaining a link between online and offline is essential for providing data that represents the relationship between those two spheres in terms of how individuals perceive and produce social class, and the broader effects that relationship may have.

Recruitment
I specified Facebook as the means through which I would recruit participants, partly because it meant they were likely to be regular Facebook users with a reasonable understanding of how the platform functions, but also because I wanted to keep as much of the research as possible within the psychic environment of Facebook, helping participants stay focused on discussing things that happen there. I began by asking members of a general interest Facebook group of which I’m a member to share the call for participants on their own accounts. There are considerable ethical implications in using Facebook in this manner, especially when using my personal account for recruitment. It could result in a pool of participants who are connected to me personally in some direct or indirect way, which has the potential to compromise the integrity of the research or cause tension in my personal relationships. Precautions were put in place to avoid these kinds of conflict, mainly through checking possible connections to potential participants.

Communication
I set up a Facebook account in the name of the research, specifically for the purpose of handling communication and logistics with participants, again as part of wishing to keep all elements of the research within Facebook as far as possible. Participants add the account as a friend, and then I can use the messages tool to stay in touch with participants, arrange focus group and interview sessions, and send them links to consent forms and other information.
This has proven to be an effective means of staying in touch, and it means I can provide information quickly and easily in a medium that is both convenient for the participants and within the environment that I’m researching.

Stimulus for discussion
There was some concern that during the focus group sessions it would be easy for the discussion to deviate from Facebook, and that it might be difficult to even begin talking about it in a face-to-face encounter with others. In response I devised a ‘dummy’ Facebook newsfeed page as a way to stimulate discussion and maintain focus on Facebook. By using this page, I can guide discussion by referring to it and asking the groups to comment on different elements within it, framing my questions around it to stay on track. Class is a difficult topic to discuss: everyone understands it differently and has had different experiences of it, so rather than directly addressing it, I am able to talk about self-representation more generally in terms of Facebook and explore how class shows itself there. The content for this dummy page comes from the pages of people in my own friendship network who volunteered, and is subject to a very rigorous consent and anonymisation process.

For the interviews, participants’ own shared content is used. They provide consent for me to select some items they have shared, and then it’s used as a means to stimulate discussion, serving a similar purpose as the dummy page.

Conclusion
Using Facebook as a tool for research requires significant planning and reflexivity throughout the whole research process, but can offer benefits in terms of having access to large networks of individuals for recruitment purposes, as well as an easy and convenient way to stay in touch with participants. The difficulties in planning are related to the considerable ethical implications of using content shared by participants, and ensuring informed consent is in place at all times.


Facebook is a crucial site for research that seeks to understand contemporary society, as its use grows and it becomes further embedded in the lives of its users. Developing well thought out approaches to this kind of work is essential for maximising the research potential of the platform, and for making sure that research is carried out with integrity.

Friday 2 December 2016

On Social Media Analytics

Phillip Brooker is a research associate at the University of Bath (UK) working in social media analytics, with a particular interest in the exploration of research methodologies to support the emerging field. Phillip is a member of the team behind Chorus, a Twitter data collection and visualisation suite (www.chorusanalytics.co.uk). He currently works on CuRAtOR (Challenging online feaR And OtheRing), an interdisciplinary project focusing on how “cultures of fear” are propagated through online “othering”. @Chorus_Team

NSMNSS events have always been good value for me. I haven't quite been a part of the network since it kicked off, but I certainly have tried to be an 'active member' for the years that I have been involved with it. So when Curtis Jessop emailed me to ask if I'd give a talk on the practicalities of using Chorus to do social media analytics research, I jumped on it. More so than telling people about our software and what we've used it to do, these events are always the perfect chance to hear about innovative current research in the field. I won't go through my talk in too much detail here since I generally try not to be too reductive about how Chorus might be used in social research. Best to download it, watch the tutorial video, read the manual and then play about with it yourself (all of which you can do at :::PLUG ALERT!::: www.chorusanalytics.co.uk).

Suffice to say that my talk aimed to run through the basic features and functions of Chorus as a free tool for collecting and (visually) analysing Twitter data. This included a demonstration of the two different data collection modes – the more familiar query keyword search, which you can use to look for hashtags and so on, and our native user-following data collection function, which lets you capture sets of users’ Twitter timelines. From there, I ran through the different ways of visualising data within Chorus: in brief, the timeline explorer, which provides a variety of metrics (e.g. tweet volume, percentage of tweets with URLs, positive and negative sentiment, novelty and homogeneity of topic) as they change across time, and the cluster explorer, which produces a topical map of the entire dataset based on the frequency with which words co-occur with one another. The aim here was to show how Chorus might be used by researchers to answer lots of different types of research question, both as a full all-in-one package and in a more exploratory way if users want to quickly dig into some data for a pilot study or similar. Readers especially interested in what Chorus might offer might find one of our recent methods papers useful (available at: http://bds.sagepub.com/content/3/2/2053951716658060).
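As a rough illustration of the idea behind the cluster explorer (and emphatically not Chorus's actual implementation), counting which words appear together in the same tweet is enough to start sketching a topical map. A toy version in R, with invented tweets:

tweets <- c("big data needs big methods",
            "social data is social science data")

# Split each tweet into its unique words, then count every pair of
# words that co-occurs within the same tweet.
word_sets <- lapply(strsplit(tolower(tweets), "\\s+"), unique)
pairs <- do.call(rbind, lapply(word_sets, function(w) {
  if (length(w) < 2) return(NULL)
  t(combn(sort(w), 2))
}))
cooc <- table(paste(pairs[, 1], pairs[, 2], sep = " + "))
sort(cooc, decreasing = TRUE)

Word pairs with high counts would sit close together on a topical map of the kind Chorus draws.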

However, what I want to comment more pointedly on in this blog is the NSMNSS event itself, because to me it marks something of a turning point in social media analytics, where it's finally becoming very clear just how distinctive we've made (and are continuing to make) the field. There seems to have always been this worry that working with digital data runs the risk of turning the social sciences into unthinking automata for blindly spotting patterns – the supposed ‘coming crisis of empirical sociology’ referred to by Savage and Burrows in 2007. And that characterisation has not really disappeared, despite social media analysts’ natural objections to it as a way of representing our work. Thus far, social media analytics has (arguably) necessarily had to progress in a way that directly references those concerns – researchers have made it their explicit business to show, through both conceptual and empirical studies, that there is more to social media data than correlations. However, at this most recent NSMNSS event I got the sense, very subtly, that something different was happening. As a community, we seem to be moving past that initial (and, I reiterate, very necessary!) reaction into a second phase where we’re beginning to be more comfortable in our own skin. We’re now no longer encumbered by the idea of social media analytics as “not data science”, and we’re seeing it recognised more widely as a thing in and of itself. As I say, it might seem a subtle distinction, but to me it suggests that finally we’re finding our feet!

Of course, this doesn’t mean we have neatly concluded any of the long-standing or current arguments about the fundamental precepts of the field – my background in ethnomethodology and ordinary language philosophy gives me a lot to say about the recent incorporation of ideas from Science and Technology Studies into social media analytics, for instance. But nonetheless, for me, this event has demonstrated the positive and progressive moves the field seems to be making as a whole. We already knew it of course, but it’s clearer than ever that there are very interesting times ahead for social media analytics!

Wednesday 30 November 2016

Democratising Access to Social Media Data – the Collaborative Online Social Media ObServatory (COSMOS)

Luke Sloan is a Senior Lecturer in Quantitative Methods and Deputy Director of the Social Data Science Lab at the School of Social Sciences, Cardiff University, UK. Luke has worked on a range of projects investigating the use of Twitter data for understanding social phenomena, covering topics such as election prediction, tracking (mis)information propagation during food scares and ‘crime-sensing’. His research focuses on the development of demographic proxies for Twitter data to further understand who uses the platform and increase the utility of such data for the social sciences. He sits as an expert member on the Social Media Analytics Review and Information Group (SMARIG), which brings together academics and government agencies. @DrLukeSloan

The vast amount of data generated on social media platforms such as Twitter provides a rich seam of information for social scientists on opinions, attitudes, reactions, interactions, networks and behaviour that was hitherto unreachable through traditional methods of data collection. The naturally occurring, user-generated nature of the data offers different insights into the social world from data collected explicitly for the purposes of research; social media data thus augments our existing methodological toolkit and allows us to tackle new and exciting research problems.

However, to make the most of a new opportunity we need to learn how the tool works. What does Twitter data look like? How is it generated? How do we access it? How can it be visualised? The bottom line is that, because social media data is so different to anything we have encountered before, it’s hard to understand how it can be collated and used.

That’s where COSMOS comes in. The Collaborative Online Social Media ObServatory (COSMOS) is a free piece of software that has been designed and built by an interdisciplinary team of social and computer scientists. It provides a simple and visual interface through which users can set up their own Twitter data collections based on random samples or key words, and plot this data in maps, as networks, or through other visual representations such as word clouds and frequency graphs. COSMOS allows you to play with the data, selecting subsets (such as male and female users) and seeing how they differ in their use of language, sentiment or network interactions. It directly interrogates the ONS API and draws in area-level statistics from the 2011 Census, allowing you to investigate the relationship between, for example, population characteristics (Census) and anti-immigrant sentiment by locale (Twitter). Any social media data collected through COSMOS can then be exported in a variety of formats for further analysis in other packages such as SPSS, STATA, R and Gephi.
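Once exported, the data can be explored in any of those packages. As a hypothetical sketch of the kind of subset comparison described above, assuming a COSMOS CSV export with gender and sentiment columns (the file and column names here are illustrative, not COSMOS's actual schema):

# Hypothetical COSMOS export: one row per collected tweet.
tweets <- read.csv("cosmos_export.csv")

# Proportion of each sentiment category within each gender subset.
round(with(tweets, prop.table(table(gender, sentiment), margin = 1)), 2)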

COSMOS is free to anybody working in academia, government or the third sector – simply go to www.socialdatalab.net and click on the ‘Software’ tab on the top menu bar to request access and view our tutorial videos.


Give it a go and see what you can discover!

Monday 28 November 2016

Introduction to NodeXL

Wasim Ahmed, from the University of Sheffield, is a PhD researcher in the Information School and a Research Associate at the Management School. Wasim is also a social media consultant, a part of Connected Action Consulting, and has advised security research teams, crisis communication institutions, and companies ranked within the top 100 of the Fortune Global 500 list. Wasim often speaks at social media events, and is a regular contributor to the London School of Economics and Political Science (LSE) Impact blog. @was3210

This blog post is based on a talk of the same name delivered at the Introduction to Tools for Social Media Research conference. The slides for the talk can be found here. This blog post introduces and outlines some of the features of NodeXL.
Network Overview, Discovery, and Exploration for Excel (NodeXL) is a graph visualization tool which allows the extraction of data from a number of popular social media platforms, including Twitter, YouTube and Facebook, with Instagram capabilities in beta. Using NodeXL it is possible to capture data and process it to generate a network graph based on a number of graph layout algorithms.
NodeXL is intended for users with little or no programming experience to perform Social Network Analysis. Social Network Analysis (SNA) is:
 “the process of investigating social structures through the use of network and graph theories” (Otte and Rousseau, 2002)
Figure 1 below displays the connections between workers in an office:

Figure 1 – An example network graph
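The same node-and-edge structure can be sketched in a few lines of R using the igraph package; this is purely illustrative (NodeXL itself requires no code), and the workers and connections below are invented:

library(igraph)

# Each row records one working relationship between two colleagues.
edges <- data.frame(
  from = c("Anna", "Anna", "Ben",  "Cara"),
  to   = c("Ben",  "Cara", "Cara", "Dev")
)
g <- graph_from_data_frame(edges, directed = FALSE)

degree(g)       # how many connections each worker has
betweenness(g)  # who bridges otherwise separate colleagues
plot(g)         # draw the network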
We can also think of the World Wide Web as a big network where pages are nodes and links are edges. The Internet is also a network, where nodes are computers and edges are physical connections between devices. Figure 2, below, from Smith, Rainie, Shneiderman and Himelboim (2014), provides a guide to contrasting patterns within network graphs.
The figure below shows that different topics on social media can have contrasting network patterns. For instance, in a polarized crowd discussion one set of users may talk about Donald Trump and another about Hillary Clinton; in a unified crowd, users may talk about different aspects of the election; and in brand clusters, people may offer opinions related to the election without being connected to one another and without mentioning each other. In a community cluster, a group of users may talk about the different news articles surrounding Hillary Clinton. Broadcast networks are typically found when analysing news accounts, as these disseminate news which is retweeted by a large number of users. Support networks are those accounts which reply to a large number of accounts; think of the customer support account of a bank, which may reply to a large number of Twitter users.

Figure 2 – Six types of network structure
NodeXL can also generate a number of metrics associated with the graphs, such as the most frequently shared URLs, domains, hashtags, words, word pairs, replied-to users, mentioned users, and most frequent tweeters. These metrics are produced overall and also by group of Twitter users. By looking at the metrics associated with different groups (G1, G2, G3, etc.) you can see the different topics that users may be talking about.
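As a toy illustration of 'overall versus by group' (invented data, not NodeXL output), the same kind of breakdown looks like this in R:

# Invented data: one row per tweet, with the user's group and a hashtag.
tweets <- data.frame(
  group   = c("G1", "G1", "G2", "G2", "G3"),
  hashtag = c("#bigdata", "#rstats", "#bigdata", "#ai", "#bigdata")
)

sort(table(tweets$hashtag), decreasing = TRUE)  # most frequent overall
with(tweets, table(group, hashtag))             # broken down by group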
NodeXL also hosts a graph gallery where users can upload workbooks and network graphs. With regard to ethics in an academic context, however, uploading to the graph gallery may not be permitted, as participants will be personally identifiable. It is possible instead to use NodeXL to create graphs offline and to report results in aggregate.

Thursday 24 November 2016

Critically Engaging with Social Media Research Tools: Select Your Targets Carefully

Dr Steven McDermott lectures on contemporary developments in media and communications, with an emphasis on the social understanding and analysis of digital media; social media platforms and the public sphere; the politics and philosophy of digital media; and media and communications research methodologies at the London College of Communication, University of the Arts London. @soci s.mcdermott@lcc.arts.ac.uk

My presentation, given at the SRA and #NSMNSS event, allowed me to finally meet face-to-face with the seven expert speakers presenting tools for social media research. It was a day of learning for me: an all-day event that left me elated and keen to get on with my research in the knowledge that I would be able to call on the expertise of the others if needed – and of course my door is always open to the other speakers. I highly recommend taking part in similar future events to all.

The talk was titled “Critically Engaging with Social Media Research Tools”; it was about using the tools but with ethical concerns at the forefront of the social researcher’s mind, rather than relegating them to a mere paragraph in the methods section. In order to illustrate the fluid nature of the visualisations that the software can co-create, I decided to collect, analyse and visualise Twitter data on the hashtag #BigData. By selecting this hashtag, I was also keen to get behind who or what organisations were promoting the buzz surrounding big social data.

The tools that I introduced (TAGS, YourTwapperKeeper, DMI-TCAT, Gephi and Leximancer, used to collect and analyse data from Twitter and YouTube) enable the social researcher to take part, in a limited capacity, in surveillance capitalism. Researchers are able to collect big social data from people’s lives without their knowledge or consent. I was keen to highlight the notion that, as researchers are in this position of observing others’ interactions, they have a duty of care to those they are researching, as we do when applying any other research tool.

The answer to the question of who or what institution is behind, is the key influencer in, or is controlling the flow of communication on #BigData was revealed by analysing 1,040,000 #BigData tweets with Leximancer. On Twitter the key influencer around the term #bigdata is a contractor who supplies staff to the National Security Agency in the United States: Booz Allen Hamilton, the contractor that employed Edward Snowden.
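For a sense of the mechanics, one very crude proxy for a 'key influencer' is simply counting which accounts are mentioned most often in a corpus. The R sketch below illustrates only that proxy, not Leximancer's concept-mapping method, and the tweets are invented:

tweets <- c("#BigData briefing via @BoozAllen",
            "@BoozAllen and @IBM on #BigData trends")

# Extract @mentions from each tweet and rank accounts by frequency.
mentions <- unlist(regmatches(tweets, gregexpr("@\\w+", tweets)))
sort(table(mentions), decreasing = TRUE)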

This visualisation was presented with the caveat that the graphs and images being shown are the result of numerous steps and decisions by the researcher, guided by certain principles from social network analysis (SNA) and graph theory. What was presented were a few of the techniques and tools of data mining and analytics, with machine learning and automation in Leximancer. Insights that ‘come’ from the data and the application of algorithms need to be validated in the light of an informed understanding of the ‘never raw data’ position. The existence of this ‘data’ is the result of a long chain of requirements, goals and a shift in the wider political economy: surveillance capitalism. The ‘insights’ are at the macro level – devoid of this context.

Big/social data do not represent what we think they do. They represent something, and this is worth investigating. We are looking at the various ways in which populations are defined, managed, and governed. The modelling algorithms used to visualise the social data know nothing about letters, nothing about narrative form, nothing about people.

The algorithm’s lack of knowledge of semantic meaning, and particularly its lack of knowledge of social media as a form or genre, lets it point us to a very different model of the social. Such ‘Reading Machines’ are engaged in the datafication of the social. The concern with the notion of datafication is that, as it attempts to describe a certain state of affairs, it flattens human experience. It is this flattening by computer-aided approaches to researching social media platforms that requires caution, and it can be ameliorated by the application of ethnographic approaches to collecting social media data from Twitter and other platforms.

A major worry is that designers, developers, and policy makers will continue to take big/social data at face value, as an object or representation of a truth that can be extracted from and that reflects the social.

We are glimpsing the various ways in which we are to be defined, managed, and governed. As social researchers we too engage in defining, managing and governing. The first ethical step when using the tools listed below is to have a carefully formulated research question and to select your targets carefully.


------------------------------------------------------------
What follows is the list of tools referred to during the talk and links to each tool with installation support where provided. Also found here: https://snacda.com
  •      TAGS – available here – https://tags.hawksey.info/ by Martin Hawksey. Contains useful instructions and videos to help setting it up.
I have also created a step-by-step set-up guide for TAGS V6 – https://1drv.ms/b/s!ApdJKDPeE0fSmgo6z6yDln43Kb7X
The only concern is that Twitter now requires you not only to have a Twitter account but also to install their app on your phone, provide them with your phone number and verify it. So it’s “free”! Just provide us with your entire identity and all the data that goes with it!


  •      YourTwapperKeeper – it has been seriously undermined by changes to Twitter’s rules and regulations, and its creator John O’Brien III seems to have sold it to Hootsuite and left it at that. It may now be in contravention of Twitter’s Terms of Service.


  •      DMI-TCAT – the Digital Methods Initiative Twitter Capture and Analysis Toolset allows for the retrieval and collection of tweets from Twitter and for analysing them in various ways. Please check https://github.com/digitalmethodsinitiative/dmi-tcat/wiki for further information and installation instructions. This software is highly recommended – it also has a version that can access YouTube – https://github.com/bernorieder/YouTube-Data-Tools


  •      Gephi – can now be used to collect Twitter data, and operates on Windows and Apple operating systems – just be very careful with Java updates and incompatible versions of macOS.


  •      Tropes – designed for Information Science, Market Research, Sociological Analysis and Scientific studies, Tropes is Natural Language Processing and Semantic Classification software that “guarantees” pertinence and quality in Text Analysis.

  •      Leximancer – computer software that conducts quantitative content analysis using a machine learning technique. It learns what the main concepts are in a text and how they relate to each other, conducting a thematic analysis and a relational (or semantic) analysis of the textual data.

Tuesday 22 November 2016

Westminster Student Blog Series

We have been posting a series of blogs written by University of Westminster Postgraduate students. They are all based on their research of social media, and come with a YouTube video as well. This is the last blog of the series - thank you to all of the students who contributed their work.

Social Media users: The Digital Housewife?
Valerie Kulchikhina (@v_kulchikhina) is a student on the Social Media Master’s degree programme at the University of Westminster. She earned her Bachelor’s degree in journalism and advertising at Lomonosov Moscow State University.

https://www.youtube.com/watch?v=4SssZatyTyM&feature=youtu.be

New social media platforms are created every few years. For instance, after the success of MySpace in 2003 and Facebook in 2004 came the launch of Twitter in 2006. In addition, there has been an emergence of image- and video-based applications, such as Instagram and Snapchat (released in 2010 and 2011 respectively).
  
The main source of income for these companies is data: basic information about a website’s members, their likes, comments, photographs, videos and sometimes even user-generated content (e.g. YouTube, Pinterest). Consequently, some scholars have regarded this process as the exploitation of users’ labour. This subject has been explored in Digital Labor by Trebor Scholz, in Digital Labour and Karl Marx by Christian Fuchs, as well as in other books and publications.

However, Dr. Kylie Jarrett provides a new critical model for the issue of digital labour exploitation by applying Marxist feminist theorisation. According to Jarrett, there are notable similarities between the exploitation of domestic workers’ labour and online users’ labour. For example, in both instances their work remains unpaid, even though it is integral to the capitalist market. These ideas are presented and explored in her new book, entitled Feminism, Labour and Digital Media: The Digital Housewife. There, Jarrett also addresses a variety of topics, from Marxist works to identity politics.

In order to find out more about Jarrett’s perspectives, we reached out to the author herself and conducted a short but very informative interview. First, we examined the intriguing concept of the ‘digital housewife’ that allowed the author to explore the feminised experience of labour. Second, we discussed how Jarrett first came across the concept of the ‘feminisation of labour’ while reading about neoliberal economics and politics; that idea later evolved into the term ‘housewifisation’, which the author discovered in the influential works of Maria Mies. Third, we analysed several similarities between domestic labour and online labour that initially captured Jarrett’s attention. For instance, the author notes, ‘they are both providing inalienable commodities that are part of the alienated commodity exchanges’. Moreover, both types of labour participate in developing ‘meaningful subjectivity’. However, Jarrett emphasizes that even while sharing so much in common, they are not the same.

In addition to that, the author explained her opinion on the importance of feminism. She notes how feminist theorisation showed the economic influence of domestic work that previously was simply considered ‘natural’ labour. Thus, feminist critique helped to demonstrate the valuable role which consumer labour plays in the capitalist world.
She also mentions several reasons why the framework for housewives’ unpaid work has not garnered more attention over the years. For example, she reminds us that for a long time domestic work was perceived to be organic labour and, therefore, ‘not productive’.

Furthermore, Jarrett describes orthodox Marxism as ‘incoherent’ towards women, whose work was often discussed in the same context as nature.  Within this framework, it is not surprising that feminist theorisation was not able to gain more visibility for a long time.  
Jarrett also contemplated the possibility of building an online world where user labour is no longer exploited. During this discussion, Jarrett mentions that feminist theorisation shared some models of creating a harmonious medium. However, she highlights that ‘we do need to challenge a lot more than exploitation’.

In her book, Jarrett references numerous scholars, including feminist thinkers and other theorists. For instance, the author addresses the opinions of Mark Andrejevic and Tiziana Terranova, the latter of whom believes that ‘free labour … is not necessarily exploited labor’. It was interesting to discover Jarrett’s responses to these notions, namely: ‘Yes, you are right, but also…’ She uses the simple example of liking someone’s Instagram post to show that it is a social interaction, but also an action that is exploited structurally.


In summary, Jarrett manages to successfully utilise the framework of ‘unpaid reproductive work’ and apply it to the current discourse of online labour exploitation. Using different examples and her own personal experience, she makes a seemingly complex topic more accessible to students and scholars alike. Hopefully, readers will find the accompanying video to be an interesting introduction to Jarrett’s recent work. Perhaps it will help to further endorse the significance of feminists’ works in the field of digital media studies. 

 

Friday 11 November 2016

What can social media tell us about society? - Videos & slides available

Thanks to everybody who attended our event at Twitter HQ on Tuesday looking at how social media data can be used to help us understand society. It was a great evening with interesting talks from Joseph Rice, Josh Smith, Callum Staff, Rob Procter, and Dr Aude Bicquelet.

For those of you not able to attend, or follow along on Periscope, check out the links below to look at slides and videos from the event.

Joe Rice, Twitter - What's possible when you know what the whole world is thinking?
Watch video

Callum Staff, Department for Education - #vom: predicting norovirus using Twitter
Watch video
Download slides

Rob Procter, University of Warwick - Thinking practically about social data science
Watch video
Download slides

Aude Bicquelet, NatCen - Using text mining to analyse YouTube comments on chronic pain
Watch video
Download slides

Josh Smith, Demos - Listening to the digital commons
Watch video
Download slides

Q & A with all speakers
Watch video

Wednesday 9 November 2016

Westminster Student Blog Series

We will be posting a series of blogs written by University of Westminster Postgraduate students. They are all based on their research of social media, and come with a YouTube video as well. We will be posting one a week for the next month, so keep your eyes peeled!

Pandora’s box: The Conflict Between Privacy and Security

Trenton Lee (@trentjlee) is a PhD Researcher at the Communications and Media Research Institute and the Westminster Institute for Advanced Studies at the University of Westminster. His research focuses on the intersection of critical political economy of the internet and identity theory.
 
The Guardian published a piece discussing the “uncomfortable truths” of the Apple vs. FBI court case in the United States, in which the FBI wanted Apple to aid a terrorist investigation by developing a “back door” to “circumvent user-set security feature in any given iPhone” (Powles and Chaparro 2016). The authors argue that companies like Apple, Google and Facebook, which collect and store an exorbitant amount of the population’s information, must earn our trust, which is “predicated on transparency and it demands accountability, not marketing and press releases” (ibid). Christian Fuchs, in his recently published book, Reading Marx in the Information Age: A Media and Communication Studies Perspective on Capital Vol 1, demands this same transparency and accountability. Fuchs states that communication companies only tell one side of the story by, as Marx would say, “fetishizing” use-value (i.e. connectivity, communication) “in order to distract from exchange-value, from the fact that communications companies are out to make lots of money” (2016, p. 1). Throughout the book, Fuchs engages with the concepts and theories Karl Marx developed in Capital Vol 1, developing Marx’s critique of political economy into a critique of the political economy of communication, which is useful in the study of the “role and contradictions of information in capitalism” (ibid).
Understanding the role and contradictions of information lies at the centre of the debate surrounding the Apple vs. FBI court case. How is this information collected? Why is it collected? What happens to it? Who decides this? 

This court case sits at the centre of two clashing issues, the need for security and the right to privacy, which together ignite a crisis of morality. In times of crisis, people turn to each other to exchange information, experiences and stories to make sense of it. In the case of Apple vs. FBI, this exchange has developed into a familiar cultural narrative, one that ends in chaos: Pandora’s box. The UN human rights chief, Zeid al Hussein, described the FBI’s actions as an attempt to open Pandora’s box, the mythological container holding all the world’s evils (Nebehay 2016). It is an interesting allegory for Hussein to apply to this dilemma over the management of the information collected by companies like Google, information which is produced on a mass scale as a commodity, a ‘peculiar good’ (Fuchs 2016). This information is then stored in and left under the management of information companies like Apple, Google and Facebook, putting them in the role of Pandora, the one who guards the box. However, their close ties to the capitalist mode of production and the concentration of power these companies possess challenge the trust we can place in their hands. We must use Marx and his political economic framework as a means to achieve the desired transparency and accountability that predicates the public’s trust in these information companies. Should we allow these companies to take on the role of Pandora? Will they guard the box that contains all of the world’s evils? Or will they, too, fail at the job?

References:
Fuchs, C. (2016). Reading Marx in the Information Age: A media and communication perspective on Capital Vol 1. New York: Routledge.

Nebehay, S. (2016). UN Human Rights Official Warns Against 'Pandora's Box' Precedent In Apple vs. FBI Case. Huffington Post, 4 March. Available from http://www.huffingtonpost.com/entry/a... [Accessed 20 April 2016].

Powles, J. and Chaparro, E. (2016). In the wake of Apple v FBI, we need to address some uncomfortable truths. The Guardian, 29 March. Available from https://www.theguardian.com/technolog... [Accessed 20 April 2016].

Monday 7 November 2016

What can social media tell us about society? - Event on Periscope

On Tuesday 8th November, NatCen Social Research will be hosting an event as part of the ESRC Festival of Social Science looking at how, in an increasingly digital age, social media research offers new ways of understanding society's attitudes and behaviours.

The event will run from 17.00 to 19.00, featuring presentations from researchers who have used social media for research in a range of settings including government and academia, followed by a panel discussion.

Unfortunately, the event is fully booked, but if you'd like to follow along, the event will be streamed live on Periscope through the @NatCen Twitter account. We're also looking to take questions from the Twitter audience for the panel discussion. If you can't make it, we'll make links to the broadcast available.

Confirmed speakers are:

Joseph Rice, Twitter: What's possible when you know what the whole world is thinking about any topic at any time?
Josh Smith, DEMOS: Listening to the Digital Commons.
Callum Staff, Department for Education: #vom: Predicting Norovirus using Twitter
Rob Procter, University of Warwick: Thinking practically about social data science.
Dr Aude Bicquelet, NatCen Social Research: Using Text-Mining to analyse YouTube Video Comments on Chronic Pain


Wednesday 26 October 2016

‘An introduction to tools for social media research’- an NSMNSS and SRA event

On 11th October NSMNSS and the SRA co-ran an event looking at social media research tools. Speakers came from a range of backgrounds, and discussed a mix of qualitative and quantitative methodologies, including text, image, network, and geographical analysis. All slides can be found here: http://the-sra.org.uk/events/archive/ and presenters will be contributing to a blog series about social research tools, due to be released later this year, so keep your eyes peeled!

Steven McDermott kicked things off by discussing the idea that ‘data is an ideology – a fiction that we can’t take at face value’. In his session Steven not only discussed which tools he used, but urged researchers to critically engage with the information we get from these tools, and with the biases they may carry. He concluded that social media data should be used as an ‘indicator’ (rather than a fact) alongside other methods, such as ethnography, in order to get the ‘full picture’.

Next, Wasim Ahmed talked about NodeXL, a free Microsoft Excel plug-in he uses for Twitter analytics but which can also be used with Facebook, Instagram and more! The main focus of this session was the graph function of NodeXL, which allows the mapping of networks. The tool also has a graph gallery, which allows users to access historic data stored there. NodeXL is free and very user-friendly according to Wasim, so he recommends downloading it and having a go at mapping your own data.

Moving on to developing tools for social media analysis, Luke Sloan from the COSMOS team introduced their analysis tool. Luke started off by saying that the programme was created for researchers who ‘don’t understand technology’, meaning that complex computing language is not required to use it. Like NodeXL, COSMOS is good at mapping, and can break down tweets by geography, gender and time, as well as identifying popular words and phrases in tweets; particularly useful for content and sentiment analysis.

Phillip Brooker then discussed social media analytics using Chorus. The majority of the session was interactive, with Phillip demonstrating how to use Chorus with Twitter data. Chorus allows users to retrieve data from Twitter by searching for hashtags and phrases. A good element of this tool is that it allows users to continually add data, allowing longitudinal datasets to be created. It also has a timeline function which can be used to see the frequency of tweets alongside different metrics (again, very useful for sentiment analysis), and a cluster explorer function, which allows users to see how different tweets and topics interact with each other. A function which will allow the gathering of demographic information from Twitter profiles is also currently being developed.

There were a couple of sessions on using social media for qualitative analysis; the first, from Gill Mooney, was on using Facebook to recruit for and conduct research. Gill emphasised that Facebook is good for stimulating discussion and debate, but she also identified a few drawbacks in the practical and ethical implications. Recruitment via Facebook seemed to have been slow moving, and Gill suggested that Twitter may be a better way of recruiting participants. She also stated that there are wider ethical implications with Facebook research because it often means that the researcher actively participates with the platform, which blurs the line between researcher and participant. While this makes ethical research more difficult to conduct, she believes that it makes for more vibrant research. She ended with a call for ethics boards to be more understanding of social media research, and for a clear and consistent research ethics framework across all platforms.


Sarah Lewthwaite continued with qualitative analysis, talking about developing inclusive ‘digital toolboxes’ so that research is accessible to all. Sarah stated that online research must be made accessible to all people in order to get a better sample and more vibrant data. While web accessibility is becoming more of a legal requirement for social media companies, there are still gaps in accessibility across platforms, and we therefore need greater technological innovation for social media and research tools. Sarah used the ‘over the shoulder’ method (using a remote desktop and screen sharing) to observe how some people with disabilities access and use social media.


Our final group of sessions was on image analysis.

Francesco D’Orazio discussed image (and more specifically, brand) coding and analysis using Pulsar, which works across a range of social media platforms, including Twitter and Instagram. To conduct the analysis, an algorithm is created and used alongside human coders to define certain concepts (i.e. image topics), search images, and tag them with the concepts before clustering them. Francesco believes that this form of image analysis can do more for a brand than simple logo detection.


Finally, Yeran Sun discussed using images to map tourist hotspots. Yeran used Flickr (an often-ignored platform for research) and geo-clustered images via their metadata using R and QGIS (free and open to use) to show popular tourist destinations. Often, images have longitude and latitude tags, which allow for precise mapping. Used effectively, geo-tagging such as this can provide the ‘best’ route for tourists to see all the popular hotspots or, inversely, create ‘alternative’ routes for those who wish to stay away from popular tourist sites!