Plato's Cave = Today's Algorithm Bubbles. What's your filter bubble hiding from you?

 




From Plato's Cave to Algorithm Bubbles: A Philosophical Analysis of Digital Echo Chambers

Written and compiled by: Joy, J.
Published: 2025


Abstract

This paper examines the striking parallels between Plato's Allegory of the Cave and contemporary algorithmic content curation systems. Drawing on classical philosophy and modern computational theory, I argue that social media algorithms create digital caves—personalised information environments that limit users' perception of reality in ways remarkably similar to Plato's ancient thought experiment. Through analysis of filter bubbles, recommendation systems, and epistemic isolation, this research demonstrates how 2,400-year-old philosophy remains urgently relevant to understanding our digital predicament.

Keywords: Plato, epistemology, algorithm bias, filter bubbles, social media, digital philosophy, information curation


1. Introduction

In Book VII of The Republic, Plato presents one of Western philosophy's most enduring metaphors: prisoners chained in a cave, perceiving only shadows on a wall, mistaking these projections for reality itself (Plato, 380 BCE/1991). While ancient in origin, this allegory has found unexpected resonance in the 21st century's digital landscape. As Pariser (2011) documented in his seminal work on filter bubbles, algorithmic curation systems increasingly determine what information individuals encounter online, effectively constructing personalised caves of curated content.

This research explores a fundamental question: Are modern social media users experiencing a technologically mediated version of Plato's cave? More specifically, I examine whether algorithmic content filtering creates epistemic conditions analogous to those Plato described—conditions where individuals mistake limited, curated information for comprehensive reality.

The stakes of this comparison extend beyond philosophical curiosity. If algorithmic systems function as digital cave walls, the implications for democratic discourse, knowledge acquisition, and human flourishing demand serious examination. As Sunstein (2017) argues, when citizens inhabit separate information universes, the foundations of shared civic life erode.


2. Plato's Cave: A Brief Recapitulation

2.1 The Allegory's Structure

Plato's cave allegory unfolds in stages. Prisoners, chained since childhood, face a cave wall. Behind them burns a fire, and between the fire and prisoners, people carry objects that cast shadows on the wall. The prisoners, unable to turn their heads, perceive only these shadows and mistake them for reality (Plato, 380 BCE/1991, 514a–520a).

The allegory continues: if a prisoner were freed and forced to turn toward the fire and actual objects, the light would pain him. If dragged outside into sunlight, he would initially be blinded. Only gradually could he perceive actual objects, then reflections in water, and finally the sun itself—what Plato identifies as the Form of the Good, the ultimate truth (Plato, 380 BCE/1991).

2.2 Epistemological Implications

The cave allegory addresses fundamental questions about knowledge, perception, and reality. Plato distinguishes between doxa (opinion based on sensory experience) and episteme (true knowledge based on reason) (Annas, 1981). The prisoners possess only doxa—beliefs about shadows that, while consistent within their limited experience, fundamentally misrepresent reality.

Critically, Plato suggests that most humans live analogously to the prisoners. We mistake immediate sensory experience and cultural conditioning for objective reality, rarely questioning the limitations of our perceptual apparatus or social position (Reeve, 1988). The philosopher's task, then, becomes helping others recognise these limitations and ascend toward truth.

2.3 The Problem of Return

Plato's allegory concludes with a troubling observation: if the freed prisoner returns to inform others, they won't believe him. His eyes, adjusted to sunlight, now struggle in the cave's darkness. The other prisoners, comfortable with their shadow-reality, would mock him and resist any attempt at liberation (Plato, 380 BCE/1991, 517a).

This resistance to enlightenment—what we might call epistemic inertia—proves particularly relevant to contemporary digital discourse. As we shall see, users within algorithmic bubbles often resist information contradicting their curated reality, much as Plato's prisoners resist the freed man's testimony.


3. Algorithmic Curation and Filter Bubbles

3.1 How Algorithms Shape Information Diet

Modern social media platforms employ sophisticated recommendation algorithms designed to maximise user engagement. These systems analyse behavioural data—clicks, viewing duration, likes, shares—to predict and deliver content likely to hold attention (Bucher, 2018). As Gillespie (2014) notes, algorithms produce "calculated publics," constructing personalised information environments tailored to each user.

Facebook's News Feed algorithm, for instance, prioritises content from friends and pages with whom users frequently interact, while deprioritising content that generates negative reactions or disengagement (DeVito, 2017). YouTube's recommendation system, reported to drive over 70% of viewing time on the platform, optimises for watch time through deep neural networks that predict which videos will keep users engaged (Covington et al., 2016).
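
To make the mechanism concrete, the sketch below shows, in deliberately simplified form, how an engagement-optimised ranker might work: candidate posts are scored from behavioural signals (past click rate, expected watch time, tie strength) and only the top-scoring few are shown. The Candidate class, the weights, and the scoring formula are invented for illustration and do not describe any platform's actual model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A piece of content the platform could show to a user."""
    post_id: str
    topic: str
    past_click_rate: float          # how often this user engaged with similar items (0-1)
    expected_watch_seconds: float   # predicted dwell/watch time for this user
    is_from_close_contact: bool     # frequently-interacted-with friend or page

def engagement_score(c: Candidate) -> float:
    """Toy stand-in for the engagement probability a real system would learn
    from behavioural data (clicks, viewing duration, likes, shares)."""
    score = 0.6 * c.past_click_rate
    score += 0.3 * min(c.expected_watch_seconds / 300.0, 1.0)  # cap credit at 5 minutes
    if c.is_from_close_contact:
        score += 0.1  # boost for sources the user interacts with often
    return score

def rank_feed(candidates: list[Candidate], k: int = 3) -> list[Candidate]:
    """Show only the top-k items by predicted engagement; everything else
    stays invisible, which is the 'filter' in the filter bubble."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        Candidate("a", "politics-left", 0.9, 240, True),
        Candidate("b", "politics-right", 0.1, 60, False),
        Candidate("c", "cooking", 0.4, 120, False),
        Candidate("d", "politics-left", 0.8, 300, True),
        Candidate("e", "science", 0.3, 90, False),
    ]
    for c in rank_feed(pool):
        print(c.post_id, c.topic, round(engagement_score(c), 2))
```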

The result is what Pariser (2011) terms a "filter bubble"—a personalised information ecosystem that reinforces existing interests and beliefs while filtering out contradictory or challenging content. As Bakshy et al. (2015) demonstrated in a large-scale study of Facebook users, algorithmic curation significantly reduces exposure to ideologically cross-cutting content, even controlling for users' own selection preferences.
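
One way to make such curation visible is to measure it roughly the way Bakshy et al. (2015) did at scale: as the share of political items in a feed whose alignment cuts against the user's own. The toy function below is a loose, illustrative analogue of that kind of measure; the alignment labels and sample feeds are made up for the example.

```python
from collections import Counter

def cross_cutting_share(user_leaning: str, feed_alignments: list[str]) -> float:
    """Share of political items in a feed whose alignment differs from the
    user's own, a crude illustrative analogue of 'cross-cutting' exposure.
    `feed_alignments` holds labels such as 'left', 'right', or 'none'."""
    political = [a for a in feed_alignments if a in ("left", "right")]
    if not political:
        return 0.0
    opposed = sum(1 for a in political if a != user_leaning)
    return opposed / len(political)

if __name__ == "__main__":
    curated  = ["left", "left", "left", "none", "left", "right"]    # heavily personalised feed
    balanced = ["left", "right", "left", "right", "none", "right"]  # more even mix
    print(Counter(curated),  "cross-cutting:", cross_cutting_share("left", curated))
    print(Counter(balanced), "cross-cutting:", cross_cutting_share("left", balanced))
```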

3.2 The Echo Chamber Effect

Related to filter bubbles is the phenomenon of echo chambers—spaces where beliefs are amplified and reinforced through repetition and lack of contradiction (Sunstein, 2017). While Sunstein originally applied this concept to politically homogeneous online communities, algorithmic curation dramatically accelerates echo chamber formation.

Bail et al. (2018) found that exposure to opposing political views on social media can actually increase political polarisation rather than moderate it, suggesting that algorithmic curation may protect users from cognitive dissonance by limiting cross-cutting exposure. Similarly, Ribeiro et al. (2020) documented systematic "radicalisation pathways" on YouTube, where the recommendation ecosystem channels users toward increasingly extreme content; agent-based modelling by Geschke et al. (2019) suggests how such filtering dynamics can generate filter bubbles and echo chambers even without deliberate design.

3.3 The Personalisation Paradox

Ironically, algorithmic personalisation—framed by platforms as enhancing user experience—may diminish epistemic autonomy. When users cannot see what the algorithm hides, they lose the capacity to recognise the limitations of their information environment (Bozdag, 2013). This creates what Introna (2016) calls "algorithmic opacity": users interact with information ecologies they neither understand nor control.

Eslami et al. (2015) found that a majority of the Facebook users they studied were unaware that their News Feed was algorithmically curated rather than chronological. When informed of algorithmic filtering, users expressed concern about what they might be missing, yet few changed their platform usage. This suggests that even awareness of the filter bubble may not suffice to escape it—a dynamic strikingly similar to Plato's prisoners' resistance to liberation.


4. The Cave Allegory as Analytical Framework

4.1 Structural Parallels

The correspondence between Plato's cave and algorithmic bubbles operates at multiple levels:

The Cave Wall = The Feed
Just as prisoners see only the wall before them, users see only the algorithmically curated feed. Both represent limited, mediated presentations of a larger reality (Couldry & Mejias, 2019).

The Fire and Object-Carriers = Algorithms and Content Creators
The fire illuminates objects to cast shadows; algorithms illuminate certain content while leaving other content in darkness. The object-carriers control what shadows appear; content creators and platform algorithms control what appears in feeds (Bucher, 2018).

The Shadows = Curated Content
Prisoners mistake shadows for objects; users mistake feeds for "the internet." Both represent second-order representations—not reality itself, but selected projections of reality (Pariser, 2011).

The Chains = Engagement Optimisation
Prisoners are physically chained; users are psychologically bound by engagement-optimised design—infinite scroll, autoplay, notification systems—that make disengagement difficult (Harris, 2017).

The Inability to Turn = Algorithmic Opacity
Prisoners cannot turn to see the fire and objects; users cannot see the full corpus of content from which algorithms select. Both suffer from perceptual constraints not of their choosing (Introna, 2016).

4.2 Epistemic Limitations

Both cave and algorithm create epistemic environments characterised by:

Limited Information Access: Prisoners see only shadows; users see only algorithmically selected content. Both mistake partial information for complete information (Bakshy et al., 2015).

Confirmation Bias Amplification: The shadows prisoners see confirm their existing shadow-based worldview; algorithmic feeds disproportionately show content confirming existing beliefs (Bail et al., 2018).

Unrecognised Mediation: Prisoners don't realise shadows are projections; users often don't recognise algorithmic curation. Both treat mediated information as unmediated reality (Eslami et al., 2015).

Social Reinforcement: When all prisoners see the same shadows, they confirm each other's interpretations; when users within similar filter bubbles see similar content, they reinforce shared (but potentially mistaken) beliefs (Geschke et al., 2019).

4.3 The Liberation Problem

Plato's freed prisoner faces pain, confusion, and eventual rejection by fellow prisoners. Similarly, escaping algorithmic bubbles proves challenging:

Cognitive Dissonance: Exposure to dramatically different information environments—like the freed prisoner's encounter with sunlight—creates discomfort. Bail et al. (2018) found that such exposure can increase rather than decrease polarisation.

Loss of Community: The freed prisoner loses connection with those still in the cave. Users who leave filter bubbles risk social penalties—unfollowing, unfriending, exclusion from online communities (Sunstein, 2017).

Epistemic Isolation: The freed prisoner struggles to communicate what he's learned. Users exposed to diverse information may find it difficult to discuss their insights with those still within narrow bubbles, lacking shared reference points (Pariser, 2011).


5. Critical Differences and Limitations

5.1 Agency and Choice

A crucial difference distinguishes cave from algorithm: choice. Plato's prisoners are literally chained against their will. Social media users, by contrast, choose to use platforms and could theoretically leave or seek alternative information sources (Tufekci, 2018).

However, this distinction may be less clear-cut than it appears. As Zuboff (2019) argues, surveillance capitalism creates economic and social pressures that make platform use virtually mandatory for participation in contemporary life. Moreover, algorithmic manipulation of attention exploits cognitive vulnerabilities, potentially compromising genuine autonomy (Harris, 2017).

The question becomes: when algorithms exploit predictable cognitive biases to maximise engagement, do users exercise meaningful choice? Or are they, like Plato's prisoners, constrained by forces beyond their control—not physical chains, but psychological and social ones (Yeung, 2017)?

5.2 Multiple Caves vs. Single Cave

Plato describes one cave with prisoners sharing the same shadow-reality. Algorithmic systems create millions of personalised caves—each user inhabits a slightly (or dramatically) different information environment (Pariser, 2011). This proliferation of bubbles may actually exacerbate the problem: while Plato's prisoners could potentially collaborate to understand their shared situation, users in separate algorithmic bubbles lack even common ground for dialogue (Sunstein, 2017).

5.3 The Question of Truth

Plato's allegory assumes that objective truth (the Forms) exists outside the cave. The algorithmic bubble problem is complicated by postmodern scepticism about objective truth. If no external "sunlight" exists—if truth itself is constructed or perspectival—the cave allegory may prove less useful as an analytical framework (Vaidhyanathan, 2018).

However, one need not accept Platonic metaphysics to find the cave-algorithm parallel illuminating. Even without committing to absolute truth, we can recognise that algorithmically curated information environments are more limited, biased, and distorted than users typically realise—that they mistake partial perspectives for comprehensive understanding (Bozdag, 2013).


6. Empirical Evidence and Case Studies

6.1 The 2016 U.S. Presidential Election

The 2016 election provides a compelling case study of algorithmic bubbles in action. Pre-election polling consistently underestimated Trump's support, partly because pollsters and media figures inhabited information bubbles that minimised contact with Trump supporters (Silver, 2017). Simultaneously, Facebook's algorithm amplified sensationalist and often false content, creating separate epistemic universes for Clinton and Trump supporters (Allcott & Gentzkow, 2017).

Post-election analysis revealed that users within different algorithmic bubbles encountered fundamentally different narratives about the campaign. Silverman (2016) found that in the three months before the election, the top-performing false election stories on Facebook generated more engagement than top stories from major news outlets. Users within pro-Trump bubbles saw predominantly pro-Trump (and often false) content; users within pro-Clinton bubbles saw the opposite (Bakshy et al., 2015).

The result was a breakdown of shared reality—different Americans lived in different information caves, each convinced of their shadow version of truth. The surprise many experienced at Trump's victory reflected not merely incorrect predictions, but fundamental unawareness that millions of Americans inhabited entirely different information environments (Pariser, 2011).

6.2 YouTube Radicalisation Pathways

YouTube's recommendation algorithm presents another illuminating case. Ribeiro et al. (2020) analysed 72 million user comments across channels representing the "alt-right" spectrum, finding systematic radicalisation pathways. Users who commented on mainstream conservative channels had an elevated likelihood of later commenting on alt-right and white supremacist channels.

The mechanism is algorithmic: YouTube's recommendation system optimises for watch time, and increasingly extreme content generates longer viewing sessions (Tufekci, 2018). A user watching one conspiracy video receives recommendations for more extreme conspiracy content. Over time, the algorithm constructs a radicalisation cave—a personalised information environment progressively more extreme than the user's starting point (Geschke et al., 2019).
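
A small simulation can illustrate the drift described above. The sketch below is not a model of YouTube's actual system: it simply assumes, for illustration, that watch time rises with a content "extremity" score and that the recommender greedily picks, from candidates near the current item, whichever is predicted to hold attention longest. Even though candidates are drawn symmetrically around the current item, the greedy choice ratchets extremity upward.

```python
import random

def expected_watch_time(extremity: float) -> float:
    """Toy assumption: more sensational content holds attention longer."""
    return 60 + 40 * extremity + random.gauss(0, 5)

def recommend_next(current_extremity: float) -> float:
    """Greedy watch-time optimiser: among candidates near the current item,
    pick the one predicted to keep the user watching longest."""
    candidates = [max(0.0, min(10.0, current_extremity + random.uniform(-1, 1)))
                  for _ in range(20)]
    return max(candidates, key=expected_watch_time)

if __name__ == "__main__":
    random.seed(1)
    extremity = 1.0  # the user starts with fairly mainstream content
    for step in range(15):
        extremity = recommend_next(extremity)
        print(f"step {step:2d}: extremity {extremity:.1f}")
    # Each greedy choice is only slightly more extreme than the last, but the
    # cumulative path drifts steadily toward the 10.0 ceiling.
```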

This phenomenon transcends politics. Chaslot (2019), a former Google engineer who worked on YouTube's recommendation algorithm, documented similar patterns across topics—health misinformation, flat earth theories, and anti-vaccination content. The algorithm doesn't have ideological commitments; it has engagement commitments. Extreme content is engaging; therefore, the algorithm creates extremist caves.

6.3 The COVID-19 Infodemic

The COVID-19 pandemic demonstrated how algorithmic bubbles affect public health outcomes. The WHO (2020) declared an "infodemic"—an overabundance of information, both accurate and false, making it difficult to find trustworthy guidance. Algorithmic curation amplified this problem by segregating users into distinct pandemic-interpretation caves.

Bridgman et al. (2020) found that exposure to social media increased the likelihood of believing COVID-19 misinformation, with algorithmic recommendation systems preferentially surfacing sensationalist (and often false) pandemic content over accurate health guidance. Users within anti-vaccination bubbles received predominantly vaccine-sceptical content; users within pro-science bubbles received the opposite (Burki, 2020).

The consequences were tangible: belief in misinformation correlated with reduced compliance with public health measures (Roozenbeek et al., 2020). Algorithmic caves weren't merely epistemic problems—they became public health crises as users' shadow-versions of pandemic reality determined real-world behaviour.


7. Philosophical and Ethical Implications

7.1 Autonomy and Authenticity

Plato's cave allegory raises questions about autonomy: can prisoners choose freely when their entire perceptual framework is constrained? Contemporary algorithms pose similar challenges. If users' preferences are shaped by algorithmically curated experiences, are those preferences genuinely autonomous (Yeung, 2017)?

Frankfurt (1971) distinguishes between first-order desires (wanting X) and second-order desires (wanting to want X). Authentic autonomy requires alignment between these levels. But if algorithms manipulate first-order desires through engagement optimisation, users may develop preferences they wouldn't endorse upon reflection—preferences not authentically their own (Harris, 2017).

This suggests algorithmic bubbles threaten not merely knowledge, but authentic selfhood. Users become alienated from their own desire formation, preferring what the algorithm has trained them to prefer rather than what they would choose in a less manipulated environment (Zuboff, 2019).

7.2 Democratic Deliberation

Democracy requires citizens to engage with diverse perspectives, encounter challenges to their views, and deliberate about common goods (Habermas, 1984). Plato was sceptical of democracy for precisely this reason: if most people are prisoners in caves, how can collective deliberation yield wisdom?

Modern algorithmic bubbles vindicate Plato's concern while presenting a more ominous possibility: not one shared cave, but millions of personalised caves preventing even the possibility of common deliberation (Sunstein, 2017). When citizens inhabit incompatible information environments—when they literally cannot agree on basic facts because their algorithms show them different "facts"—democratic discourse collapses (Vaidhyanathan, 2018).

Benkler et al. (2018) document this collapse in their study of media ecosystems during the 2016 election. They found not merely polarisation, but the emergence of separate, incompatible epistemic communities with no shared information sources. This represents a crisis not merely of knowledge, but of democratic possibility: self-governance requires shared reality.

7.3 The Responsibility Question

Who bears responsibility for algorithmic caves? Platform designers who create engagement-maximising algorithms? Users who choose to use platforms? Content creators who game algorithmic systems? Advertisers who fund algorithmic manipulation?

Plato's allegory suggests a role for the philosopher—the freed prisoner who returns to liberate others. But unlike Plato's scenario, where liberation requires only education, algorithmic liberation may require structural change. Individual awareness, while valuable, cannot overcome systemic algorithmic architecture designed to create and maintain bubbles (Tufekci, 2018).

This suggests that addressing algorithmic caves requires more than user education or philosophical wisdom. It requires platform regulation, algorithmic transparency, and possibly alternative social media architectures that prioritise epistemic breadth over engagement maximisation (Zuboff, 2019).


8. Escape Strategies: Leaving the Digital Cave

8.1 Individual Strategies

Despite systemic constraints, individuals can take steps to broaden their information environments:

Algorithmic Awareness: Recognising that feeds are curated, not comprehensive, represents the first step—analogous to the freed prisoner's initial realisation that shadows aren't reality (Eslami et al., 2015).

Intentional Diversification: Deliberately seeking sources outside one's bubble—following ideologically diverse accounts, using multiple platforms with different algorithmic logics, consulting traditional media alongside social media (Bakshy et al., 2015).

Chronological Feeds: When available, choosing chronological over algorithmic feeds reduces curation effects. Twitter and Instagram offer this option, though most users don't activate it (DeVito, 2017); a minimal illustration of the difference appears after this list.

Direct Source Consultation: Going directly to sources rather than relying on algorithmic suggestions—typing URLs rather than following recommendations, searching deliberately rather than browsing passively (Pariser, 2011).

Algorithmic Detox: Periodically taking breaks from algorithmic platforms to recalibrate attention and preference formation (Harris, 2017).
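
The difference the chronological option makes can be shown in a few lines: the same set of posts, ordered once by recency and once by a predicted-engagement score. The post names and scores below are invented; the point is only that the two orderings surface different content.

```python
from datetime import datetime

posts = [
    # (post_id, timestamp, predicted_engagement)
    ("breaking_news",  datetime(2025, 1, 1, 12, 0),  0.35),
    ("friend_update",  datetime(2025, 1, 1, 11, 30), 0.20),
    ("outrage_clip",   datetime(2025, 1, 1, 9, 0),   0.95),
    ("local_event",    datetime(2025, 1, 1, 10, 15), 0.10),
    ("conspiracy_vid", datetime(2025, 1, 1, 8, 0),   0.90),
]

# The same inventory, ordered two ways.
chronological = sorted(posts, key=lambda p: p[1], reverse=True)  # newest first
algorithmic   = sorted(posts, key=lambda p: p[2], reverse=True)  # most "engaging" first

print("Chronological:", [p[0] for p in chronological])
print("Algorithmic:  ", [p[0] for p in algorithmic])
```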

8.2 Platform-Level Interventions

Some advocate for platform design changes to reduce bubble effects:

Algorithmic Transparency: Requiring platforms to disclose how algorithms select content could enable users to make more informed choices (Eslami et al., 2015).

Diversity Metrics: Platforms could optimise not merely for engagement, but for exposure to diverse perspectives (Helberger et al., 2018); a toy sketch of such a re-ranking follows this list.

Friction Design: Deliberately slowing interaction—requiring confirmation before sharing, displaying content diversity indicators—could reduce impulsive bubble-reinforcing behaviour (Lorenz-Spreen et al., 2019).

User Control: Giving users meaningful control over algorithmic parameters could enable customisation beyond platform defaults (Rader & Gray, 2015).
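
As a rough illustration of the diversity-metric idea, the sketch below re-ranks candidate items by trading predicted engagement against a bonus for viewpoints not yet represented in the selection. It is a toy, greedy version of an exposure-diversity objective; the viewpoint labels, scores, and weighting are assumptions for the example, not a description of Helberger et al.'s proposal or of any deployed system.

```python
def rerank_with_diversity(items, k=3, diversity_weight=0.5):
    """Greedy re-ranking: each pick trades predicted engagement against a
    bonus for viewpoints not yet represented in the selection, a toy version
    of optimising for exposure diversity rather than engagement alone."""
    selected, seen_viewpoints = [], set()
    pool = list(items)
    while pool and len(selected) < k:
        def value(item):
            _name, viewpoint, engagement = item
            novelty = 0.0 if viewpoint in seen_viewpoints else 1.0
            return (1 - diversity_weight) * engagement + diversity_weight * novelty
        best = max(pool, key=value)
        pool.remove(best)
        selected.append(best)
        seen_viewpoints.add(best[1])
    return selected

if __name__ == "__main__":
    candidates = [
        ("story_a", "left",   0.90),
        ("story_b", "left",   0.85),
        ("story_c", "right",  0.40),
        ("story_d", "centre", 0.50),
        ("story_e", "left",   0.80),
    ]
    by_engagement = sorted(candidates, key=lambda c: c[2], reverse=True)[:3]
    print("Engagement only:", [c[0] for c in by_engagement])
    print("With diversity: ", [c[0] for c in rerank_with_diversity(candidates)])
```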

However, platform incentives work against such changes. Engagement maximisation drives advertising revenue; epistemic diversity does not. This suggests voluntary platform reform may prove insufficient (Zuboff, 2019).

8.3 Regulatory and Structural Approaches

More fundamental interventions might include:

Algorithmic Auditing: Mandatory third-party auditing of recommendation algorithms for bubble-creation effects (Sandvig et al., 2014); a simplified sketch of what such an audit could look like follows this list.

Interoperability Requirements: Requiring platforms to enable cross-platform communication could reduce lock-in effects that trap users in single algorithmic environments (Crémer et al., 2019).

Public Interest Algorithms: Developing non-commercial social media platforms optimised for democratic deliberation rather than engagement (Tufekci, 2018).

Digital Literacy Education: Systematic education in recognising and resisting algorithmic manipulation, beginning in schools (Wineburg & McGrew, 2019).

Anti-Trust Enforcement: Breaking up platform monopolies could create competitive pressure for less manipulative algorithmic designs (Wu, 2018).
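
In miniature, a bubble-creation audit might look like the sketch below: synthetic "sock-puppet" profiles are run through a recommender repeatedly and their resulting exposure distributions are compared. The recommend callable stands in for the platform access a real auditor would need, and the toy recommender and profiles are invented purely to show the shape of the procedure in the spirit of Sandvig et al. (2014), not an actual audit method.

```python
import random
from collections import Counter

def audit_bubble_effect(recommend, profiles, rounds=50):
    """Minimal sock-puppet audit: run each synthetic profile through a
    recommender repeatedly, then compare what the different profiles saw.
    `recommend(profile, history)` stands in for real platform access."""
    exposures = {}
    for name, profile in profiles.items():
        history = []
        for _ in range(rounds):
            item = recommend(profile, history)
            history.append(item)
        exposures[name] = Counter(i["topic"] for i in history)
    return exposures

# A deliberately crude recommender that mostly reinforces the profile's interest.
CATALOGUE = [{"topic": t} for t in ("politics-left", "politics-right", "science", "sport")]

def toy_recommender(profile, history):
    preferred = [i for i in CATALOGUE if i["topic"] == profile["interest"]]
    return random.choice(preferred) if random.random() < 0.8 else random.choice(CATALOGUE)

if __name__ == "__main__":
    random.seed(0)
    profiles = {
        "sock_puppet_left":  {"interest": "politics-left"},
        "sock_puppet_right": {"interest": "politics-right"},
    }
    for name, counts in audit_bubble_effect(toy_recommender, profiles).items():
        print(name, dict(counts))
```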


9. Theoretical Synthesis: Beyond Plato

While Plato's cave provides a powerful framework for understanding algorithmic bubbles, we must recognise its limitations and consider complementary perspectives.

9.1 Foucault and Disciplinary Power

Foucault's (1977) analysis of disciplinary power offers additional insight. He argues that modern power operates not through overt coercion but through normalisation—shaping subjects who internalise and self-impose control. Algorithmic systems exemplify this: they don't force users to consume particular content, but shape desire formation such that users voluntarily choose algorithmically preferred content (Cheney-Lippold, 2017).

This suggests algorithmic caves differ from Plato's in a crucial respect: the prisoners' constraint is an external, physical restraint, while users internalise algorithmic preferences as their authentic desires. The cave becomes invisible precisely because it's experienced as freedom (Yeung, 2017).

9.2 Habermas and the Public Sphere

Habermas's (1989) concept of the public sphere—a space where citizens deliberate about common concerns—helps explain the democratic stakes. For Habermas, rational deliberation requires certain conditions: accessibility, inclusivity, and shared concern for truth rather than strategic manipulation.

Algorithmic bubbles undermine all three conditions. When citizens occupy separate epistemic environments, deliberation becomes impossible. Even when users communicate across bubbles, algorithmic amplification of extreme content rewards strategic manipulation over truth-seeking (Benkler et al., 2018). The public sphere fragments into incompatible public sphericules, each with its own reality (Sunstein, 2017).

9.3 Arendt and the Common World

Arendt (1958) argues that politics requires a "common world"—a shared reality enabling plurality and difference to coexist productively. Totalitarianism, she suggests, destroys this common world by making truth and reality disappear into ideology.

Algorithmic systems may not be totalitarian, but they threaten the common world Arendt considers essential for politics. When Americans cannot agree that a pandemic is occurring, or that climate change is real, or even on what events transpired yesterday—when algorithms show each user a different reality—politics becomes impossible. We're left with what Arendt calls "worldlessness": the inability to share a common human existence (Vaidhyanathan, 2018).


10. Conclusion: The Unfinished Allegory

Plato's cave allegory concludes ambiguously. The freed prisoner returns to help others, but faces mockery and resistance. Plato suggests that if the prisoners could kill the liberator, they would (Plato, 380 BCE/1991, 517a). The allegory offers no guarantee of successful enlightenment—only the philosophical imperative to try.

Our digital moment mirrors this ambiguity. We increasingly recognise algorithmic bubbles' epistemic and democratic dangers. Yet platforms have powerful incentives to maintain and deepen these bubbles. Users, meanwhile, often prefer the comfort of algorithmically curated reality to the challenge of genuine epistemic diversity (Eslami et al., 2015).

The question becomes: will we remain in our algorithmic caves, or undertake the difficult work of liberation? Will we, like Plato's freed prisoner, return to help others escape—or remain outside, unable to communicate across bubble boundaries?

I suggest that answering these questions requires both philosophical wisdom and technical intervention. Plato's allegory reminds us that the problem is ultimately human: our tendency to mistake limited information for complete reality, our preference for comfortable falsehoods over uncomfortable truths, our resistance to perspectives challenging our worldview.

But unlike Plato's prisoners, we possess tools for restructuring our information environments. We can demand algorithmic transparency. We can choose epistemic diversity over engagement optimisation. We can build alternative platforms. We can teach digital literacy. We can regulate monopolistic platforms that profit from our imprisonment.

The cave is no longer stone and chains, but code and algorithms. But the imperative remains unchanged: we must recognise our imprisonment, seek the light of broader understanding, and help others do the same. The shadows on the wall are more sophisticated than Plato imagined—but so too are our capacities for liberation.

What ancient Athens learned, and what we must relearn: democracy cannot survive when citizens inhabit separate realities. If algorithmic caves continue to fragment our common world, self-governance becomes impossible. Escaping these caves is not merely an individual epistemic project—it's a collective political necessity.

Plato's allegory remains unfinished because liberation is never complete. Each generation faces its own caves, its own shadows, its own chains. Ours happen to be digital. The philosophical task—recognising illusion, seeking truth, building common worlds—endures.


References

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.

Annas, J. (1981). An introduction to Plato's Republic. Oxford University Press.

Arendt, H. (1958). The human condition. University of Chicago Press.

Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., ... & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216-9221.

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.

Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209-227.

Bridgman, A., Merkley, E., Loewen, P. J., Owen, T., Ruths, D., Teichmann, L., & Zhilin, O. (2020). The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media. Harvard Kennedy School Misinformation Review, 1(3).

Bucher, T. (2018). If...then: Algorithmic power and politics. Oxford University Press.

Burki, T. (2020). The online anti-vaccine movement in the age of COVID-19. The Lancet Digital Health, 2(10), e504-e505.

Chaslot, G. (2019). How algorithms can learn to discredit "the media". Medium. https://medium.com/@guillaumechaslot/how-algorithms-can-learn-to-discredit-the-media-d1360157c4fa

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York University Press.

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

Covington, P., Adams, J., & Sargin, E. (2016). Deep neural networks for YouTube recommendations. Proceedings of the 10th ACM Conference on Recommender Systems, 191-198.

Crémer, J., de Montjoye, Y. A., & Schweitzer, H. (2019). Competition policy for the digital era. European Commission.

DeVito, M. A. (2017). From editors to algorithms: A values-based approach to understanding story selection in the Facebook news feed. Digital Journalism, 5(6), 753-773.

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., ... & Sandvig, C. (2015). "I always assumed that I wasn't really that close to [her]": Reasoning about invisible algorithms in news feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153-162.

Foucault, M. (1977). Discipline and punish: The birth of the prison. Vintage Books.

Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5-20.

Geschke, D., Lorenz, J., & Holtz, P. (2019). The triple-filter bubble: Using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers. British Journal of Social Psychology, 58(1), 129-149.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-194). MIT Press.

Habermas, J. (1984). The theory of communicative action, Volume 1: Reason and the rationalization of society. Beacon Press.

Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a category of bourgeois society. MIT Press.

Harris, T. (2017). How technology is hijacking your mind—from a magician and Google design ethicist. Thrive Global. https://medium.com/thrive-global/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3

Helberger, N., Karppinen, K., & D'Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191-207.

Introna, L. D. (2016). Algorithms, governance, and governmentality: On governing academic writing. Science, Technology, & Human Values, 41(1), 17-49.

Lorenz-Spreen, P., Mønsted, B. M., Hövel, P., & Lehmann, S. (2019). Accelerating dynamics of collective attention. Nature Communications, 10(1), 1-9.

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Plato. (1991). The Republic (A. Bloom, Trans.). Basic Books. (Original work published ca. 380 BCE)

Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 173-182.

Reeve, C. D. C. (1988). Philosopher-kings: The argument of Plato's Republic. Princeton University Press.

Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A., & Meira Jr, W. (2020). Auditing radicalisation pathways on YouTube. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131-141.

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L., Recchia, G., ... & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199.

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22, 4349-4357.

Silver, N. (2017). The real story of 2016. FiveThirtyEight. https://fivethirtyeight.com/features/the-real-story-of-2016/

Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed News. https://www.buzz

-----------------------------------------------------------------------------------------------------------------------------

@JerriusCogitator Research Series
