Social media data pose pitfalls for studying behaviour
A growing number of academic researchers are mining social media data to learn about both online and offline human behaviour. In recent years, studies have claimed to predict everything from the box-office success of summer blockbusters to fluctuations in the stock market.
But mounting evidence of flaws in many of these studies points to a need for researchers to be wary of serious pitfalls that arise when working with huge social media data sets, according to computer scientists at McGill University in Montreal and Carnegie Mellon University in Pittsburgh.
Such erroneous results can have huge implications: thousands of research papers each year are now based on data gleaned from social media. "Many of these papers are used to inform and justify decisions and investments among the public and in industry and government," says Derek Ruths, an assistant professor in McGill's School of Computer Science.
In an article published in the Nov. 28 issue of the journal Science, Ruths and Jürgen Pfeffer of Carnegie Mellon's Institute for Software Research highlight several issues involved in using social media data sets, along with strategies to address them. Among the challenges:
- Different social media platforms attract different users (Pinterest, for example, is dominated by females aged 25-34), yet researchers rarely correct for the distorted picture these skewed populations can produce (a reweighting sketch follows this list).
- Publicly available data feeds used in social media research don't always provide an accurate representation of the platform's overall data, and researchers are generally in the dark about when and how social media providers filter their data streams.
- The design of social media platforms can dictate how users behave and, therefore, what behaviour can be measured. For instance, on Facebook the absence of a "dislike" button makes negative responses to content harder to detect than positive "likes".
- Large numbers of spammers and bots, which masquerade as normal users on social media, get mistakenly incorporated into many measurements and predictions of human behaviour.
- Researchers often report results only for groups of easy-to-classify users, topics, and events, making new methods seem more accurate than they actually are. For instance, efforts to infer the political orientation of Twitter users achieve barely 65% accuracy for typical users, even though studies focusing on politically active users have claimed 90% accuracy (see the evaluation sketch after this list).
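One standard correction for the demographic skew described in the first point is post-stratification weighting, a technique borrowed from survey statistics. The sketch below is a minimal Python illustration; the group labels, population shares, and outcome rates are invented for the example and are not taken from the Science article.

```python
from collections import Counter

def poststratify_weights(sample_groups, population_shares):
    """Weight each group so it counts in proportion to its population share."""
    n = len(sample_groups)
    sample_shares = {g: c / n for g, c in Counter(sample_groups).items()}
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Hypothetical sample skewed toward women aged 25-34, as on Pinterest
sample = ["f_25_34"] * 70 + ["other"] * 30
population = {"f_25_34": 0.17, "other": 0.83}  # assumed population shares

weights = poststratify_weights(sample, population)

# Hypothetical per-group rates of whatever behaviour is being measured
rate = {"f_25_34": 0.60, "other": 0.30}
raw = sum(rate[g] for g in sample) / len(sample)
adj = sum(weights[g] * rate[g] for g in sample) / sum(weights[g] for g in sample)
print(f"raw estimate: {raw:.2f}   reweighted estimate: {adj:.2f}")
```

The raw estimate reflects the platform's skewed user base; the reweighted one approximates what a population-representative sample would show, provided every group of interest actually appears in the sample.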
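The last point, about easy-to-classify users, can be guarded against by reporting accuracy separately for each user stratum rather than only for the convenient subset. The following sketch uses synthetic predictions chosen to mirror the 90% versus 65% gap cited above; the data and strata are illustrative only.

```python
def accuracy(pairs):
    """Fraction of (prediction, truth) pairs that match."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

# Synthetic (prediction, truth) pairs for two strata of Twitter users
politically_active = [("left", "left")] * 9 + [("left", "right")] * 1   # easy cases
typical_users = [("left", "left")] * 13 + [("right", "left")] * 7       # hard cases

print(f"politically active: {accuracy(politically_active):.0%}")  # 90%
print(f"typical users:      {accuracy(typical_users):.0%}")       # 65%
print(f"pooled:             {accuracy(politically_active + typical_users):.0%}")
```

Reporting only the first number would overstate how well the method generalizes; reporting every stratum makes the limitation visible.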
Many of these problems have well-known solutions from other fields such as epidemiology, statistics, and machine learning, Ruths and Pfeffer write. "The common thread in all these issues is the need for researchers to be more acutely aware of what they're actually analyzing when working with social media data," Ruths says.
Social scientists have honed their techniques and standards to deal with this sort of challenge before. "The infamous 'Dewey Defeats Truman' headline of 1948 stemmed from telephone surveys that under-sampled Truman supporters in the general population," Ruths notes. "Rather than permanently discrediting the practice of polling, that glaring error led to today's more sophisticated techniques, higher standards, and more accurate polls. Now, we're poised at a similar technological inflection point. By tackling the issues we face, we'll be able to realize the tremendous potential for good promised by social media-based research."
---------------
"Social Media for Large Studies of Behavior," Ruths and Pfeffer, Science, Nov. 28, 2014.