This is a blog post from my Master's work, which was conducted in 2018-19. It is a re-post from the original site where the blog was hosted and has only been slightly adjusted for the new website. If you are interested in the larger project, you can read the thesis here.
Personalized web pages, search results, and news feeds filter most of what we see online. You might recognize this in promoted or recommended content and products, but there is a lingering question: what are we not seeing? If the content I see is "made for me", is there not some content that isn't making it to me?
This algorithmic sorting of content is commonly referred to as a filter bubble. In 2011, Eli Pariser explored this idea, discussing how the personalization of the web is shaping the content we see, and arguably us (Pariser, 2011). His arguments build from concerns that arose in 1995, when Nicholas Negroponte discussed the notion of the "Daily Me" - a future where individuals could create their own personal news catalogue that could be digitally sent to them (Negroponte, 1995). Jumping back to Pariser, we see how our agency has become intertwined with technology to create these personal feeds that extend beyond news (Pariser, 2011). Negroponte's notion of users specifically choosing the content has not played out exactly as imagined: modern algorithms select content based on what we intentionally tell them (i.e. selecting preferences) and what they "secretly" collect (i.e. likes, shares, how long we look at something, etc.).
Underlying this is the algorithm, something I will try to quickly address. The concept of algorithmic filtering, profiling, and sorting is somewhat ambiguous. Algorithms are pervasive, and understanding them really varies with context. With many algorithms being 'blackboxed' - that is, inaccessible to researchers - they become hard to even study. These arguments linger in more recent conversations around AI, which extend these processes into neural networks and machine learning. However, for this project (and blog post) I will maintain a focus on social media algorithms and the knowledge that we currently have around them. From this perspective, we can understand algorithms as curated lines of code that are designed to gather, analyze, sort, profile, and suggest information related to users on a platform (Lyon, 2009; van Dijck, 2013; Gillespie, 2014).
When trying to understand these ominous filter bubbles, it is important to understand the factors that create them. In case you haven't heard, the product of social media sites is you and me, the users. The sites are "free" for users because they sell your data to advertisers (Srnicek, 2016). What gathers this data? The algorithms embedded into the platform. They track user actions on these sites, harvesting data from likes, scrolls, clicks, and curated content, and even reading user messages for keywords. As these practices were first being recognized, authors such as Mark Andrejevic raised concerns about the algorithmic all-seeing eye (Andrejevic, 2002). Beyond the privacy and surveillance concerns (which are critically important), others highlight how this data gathering and subsequent profiling establish information channels that can manipulate users' perceptions or, as José van Dijck (2013) discusses, potentially our behaviour and thought.
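To make that process a little more concrete, here is a deliberately simplified sketch in Python - with hypothetical function names and toy data, not any platform's actual code - of the kind of logic described above: engagement signals are aggregated into a profile, and that profile is then used to rank what the user sees next.

```python
from collections import Counter

# A toy, hypothetical feed ranker - illustrative only, not any platform's actual algorithm.
# It "profiles" a user by counting the topics of posts they engaged with,
# then scores new posts by how strongly they overlap with that profile.

def build_profile(engagements):
    """Aggregate topic tags from posts the user liked, shared, or lingered on."""
    profile = Counter()
    for post in engagements:
        for topic in post["topics"]:
            # assumed weighting: a share counts for more than a like
            profile[topic] += post.get("weight", 1)
    return profile

def rank_feed(candidates, profile):
    """Order candidate posts by how well their topics match the profile."""
    def score(post):
        return sum(profile[topic] for topic in post["topics"])
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    engagements = [
        {"topics": ["cycling", "urbanism"], "weight": 2},  # shared
        {"topics": ["cycling"], "weight": 1},              # liked
    ]
    candidates = [
        {"id": 1, "topics": ["politics"]},
        {"id": 2, "topics": ["cycling", "gear"]},
        {"id": 3, "topics": ["urbanism"]},
    ]
    profile = build_profile(engagements)
    for post in rank_feed(candidates, profile):
        print(post["id"], post["topics"])
```

Real recommendation systems are vastly more complex and largely blackboxed, but even this toy version shows how what you engaged with quietly becomes what you are shown.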
While filter bubbles appear to be a quite alarming part of our digital culture, it is important to separate mounting concerns from growing discourse. Axel Bruns' recent book (releasing September 16, 2019) explores this notion of the filter bubble. Bruns provides both a working definition and a strong critique of these concerns, recognizing the dichotomous roles of user and system that shape filter bubbles through communicative acts that share and exclude content based on the particularities of users within the system. For example, if you are sharing and liking specific content and choosing to ignore or actively remove other content from your feed, you will strengthen your filter bubble (Bruns, 2019). In this relationship, users and systems feed off of each other to 'entrench' themselves in the bubble - a toy sketch of this loop follows below.
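Purely as an illustration - with made-up topics and parameters, not drawn from Bruns or any real platform - the following toy simulation runs that loop for a few rounds: the system ranks posts by the current profile, the user engages mostly with what already matches, and the profile tends to concentrate on a narrow set of topics.

```python
import random
from collections import Counter

# A toy simulation of the user-system feedback loop - hypothetical and illustrative only.
# The "system" ranks posts by the current profile; the "user" engages with a shown post
# in proportion to their existing interest; every engagement feeds back into the profile.

TOPICS = ["cycling", "urbanism", "politics", "science", "sports"]

def simulate(rounds=6, feed_size=10, posts_per_round=50, seed=3):
    random.seed(seed)
    profile = Counter({t: 1 for t in TOPICS})  # interests start out evenly spread
    for r in range(1, rounds + 1):
        posts = [random.choice(TOPICS) for _ in range(posts_per_round)]
        # system: keep only the candidates that best match the current profile
        feed = sorted(posts, key=lambda t: profile[t], reverse=True)[:feed_size]
        # user: more likely to engage with topics they already follow
        for topic in feed:
            if random.random() < profile[topic] / sum(profile.values()):
                profile[topic] += 1
        print(f"round {r}: profile = {dict(profile)}")

if __name__ == "__main__":
    simulate()
```

Run it a few times and the profile usually drifts toward one or two dominant topics, even though every topic started equal - a rich-get-richer dynamic driven by nothing more than the loop itself.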
However, is this concern and hype potentially overblown? Bruns (2019), alongside others (Dubois & Blank, 2018), argues that while these systems exist, users are still exposed to content outside of their bubble. For example, we have friends on our social media who will share content we disagree with, or we engage with news outlets that publish articles that challenge our ideas. In the majority of current dialogue, filter bubbles are invoked as a moral panic; they are scapegoats for other concerns. In actuality, Bruns (2019) argues that we need to think about what we do with the information we are gathering. For him, "the problem isn't that there are hyperpartisan echo chambers or filter bubbles; it's that there are hyperpartisan fringe groups that fundamentally reject, and actively fight, any mainstream societal and democratic consensus [...] The filter is in our heads" (Bruns, 2019).
So what is it? Are there filter bubbles? Is it make-believe? What do we do next? Taking this knowledge back to the active research aspect of my project, it turns into arguments about what information is critical for the average user. Bruns' critique does not change the game plan for addressing these issues. While slowly becoming a loaded term, education is centrally important for tackling these concerns. This should be coupled with raising awareness of the potential of personalization and, more importantly, critically reflecting on the content that we are being shown in these spaces. Being able to challenge the systems we interact with and the information we are seeing is important, but so is understanding what knowledge to trust. We should try to engage in meaningful dialogue, consider multiple opinions, and reflect on where the information is coming from - all of which leads to the larger question of my MA: how do we do that in meaningful ways for a variety of audiences?
References:
Andrejevic, M. (2002). The work of being watched: Interactive media and the exploitation of self-disclosure. Critical Studies in Media Communication, 19(2), 230–248. https://doi.org/10.1080/07393180216561
Bruns, A. (2019). Are Filter Bubbles Real? Retrieved from https://www.youtube.com/watch?v=ouzPhoSSGYw
Dijck, J. van. (2013). Engineering Sociality in a Culture of Connectivity. Retrieved from https://www-oxfordscholarship-com.lib-ezproxy.concordia.ca/view/10.1093/acprof:oso/9780199970773.001.0001/acprof-9780199970773-chapter-1
Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21(5), 729–745. https://doi.org/10.1080/1369118X.2018.1428656
Gillespie, T. (2014). The Relevance of Algorithms. Retrieved from https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262525374.001.0001/upso-9780262525374-chapter-9
Lyon, D. (2009). Surveillance, power, and everyday life. The Oxford Handbook of Information and Communication Technologies. https://doi.org/10.1093/oxfordhb/9780199548798.003.0019
Negroponte, N. (1995). Being digital. Vintage Books.
Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Group.
Srnicek, N. (2016). Platform Capitalism. Cambridge, MA: Polity Press.