The Attention Economy and the Collapse of Cognitive Autonomy
July 15, 2025 by Rai Hasen Masoud (F'27)
Check out the first in a series on social media from Denny Center Student Fellow Rai Hasen Masoud (F'27) to learn more about the attention economy and its impacts on our democracy.
“My experience is what I agree to attend to.”
—William James
In the contemporary digital landscape, human attention has emerged as the principal object of economic capture and commodification. Referred to as the attention economy, this system treats cognitive focus as a scarce resource to be algorithmically extracted, packaged, and monetized by dominant technology platforms. While often framed in terms of convenience or engagement, the deeper implications of this model are structural and normative. The attention economy erodes core democratic values, undermining the cognitive autonomy, reflective reasoning, and informed citizenship necessary for healthy democratic societies. This paper argues that the commodification of attention is not merely a technological or cultural shift, but a political and ethical crisis requiring urgent intervention, and initiates a broader inquiry into the governance, ethics, and future of digital attention.
The Attention Economy and How It Works
The attention economy is a system in which human attention, a finite and valuable resource, is treated as a commodity: captured, analyzed, and traded for profit by digital platforms and advertisers. In this model, attention becomes the currency, and platforms like Facebook, YouTube, Instagram, and TikTok act as brokers.
The core mechanism is straightforward: social media platforms offer “free” services but generate revenue by selling targeted digital advertising. Advertisers pay platforms to place ads in front of users who are most likely to engage with (and purchase) their products. Algorithms learn what content maximizes engagement (measured in time spent, clicks, shares, etc.) and feed users more of it. This boosts “attention supply,” which in turn increases the number of ad impressions that can be sold. Platforms harvest immense volumes of personal data, ranging from browsing patterns and social connections to location and micro-interactions (pauses, scrolls, hovers). This data is analyzed to micro-target ads with unprecedented precision. Advertising slots are sold in real-time programmatic auctions, where advertisers bid on the chance to show you an ad based on your profile. The more time you spend on a platform, the more auction opportunities it creates. Revenue is almost entirely tied to attention volume (how long you’re on the platform) and targeting accuracy (how well ads are matched to your interests).
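To make these two interlocking loops concrete, the sketch below pairs a toy engagement-based ranking function with a simplified second-price ad auction. It is purely illustrative: every function name, weight, and bid is a hypothetical stand-in rather than any platform’s actual code, but it shows how optimizing for predicted engagement and selling impressions in real-time auctions fit together.

```python
# Illustrative sketch only: a toy model of engagement-based feed ranking
# plus a simplified real-time (second-price) ad auction.
# All names, weights, and numbers are hypothetical, not any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_watch_seconds: float   # predicted time a user will spend on the post
    predicted_click_rate: float      # predicted probability of a click
    predicted_share_rate: float      # predicted probability of a share

def engagement_score(post: Post) -> float:
    """Combine engagement predictions into a single ranking score.
    The weights here are arbitrary; real systems learn them from user data."""
    return (0.5 * post.predicted_watch_seconds
            + 30.0 * post.predicted_click_rate
            + 60.0 * post.predicted_share_rate)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed so the most engaging (not the most accurate) content comes first."""
    return sorted(posts, key=engagement_score, reverse=True)

def run_ad_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Simplified second-price auction: the highest bidder wins the impression
    but pays the runner-up's bid. Every extra minute of user attention creates
    more impressions, and therefore more auctions like this one."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm-explainer", predicted_watch_seconds=40, predicted_click_rate=0.01, predicted_share_rate=0.002),
        Post("outrage-clip", predicted_watch_seconds=55, predicted_click_rate=0.06, predicted_share_rate=0.03),
    ])
    print("Feed order:", [p.post_id for p in feed])  # the outrage clip ranks first
    print("Ad auction:", run_ad_auction({"advertiser_a": 2.40, "advertiser_b": 1.90}))
```

Even in this stripped-down form, the ranking step rewards whatever content the model predicts will hold attention longest, while the auction step converts each additional minute of that attention into another sellable impression.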
It is important to note how lucrative a market the attention economy is for its dominant players. Global digital advertising revenue was estimated at $567 billion in 2022 and is expected to exceed $700 billion by 2025, with social media advertising alone accounting for nearly 35% of this figure. In 2022, Alphabet (Google/YouTube) earned $224 billion in ad revenue. Meta (Facebook/Instagram) earned nearly $117 billion in ad revenue the same year. These two companies together capture more than half of all global digital advertising dollars, highlighting not only the extreme centralization of economic power in the attention market but also the willingness to wield that economic power against competition.
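The concentration claim follows directly from the figures cited above; a quick back-of-the-envelope calculation makes it explicit.

```python
# Back-of-the-envelope check of the market-share claim, using the 2022 figures cited above.
global_digital_ad_revenue = 567e9   # global digital advertising revenue, 2022 (USD)
alphabet_ad_revenue = 224e9         # Alphabet (Google/YouTube) ad revenue, 2022
meta_ad_revenue = 117e9             # Meta (Facebook/Instagram) ad revenue, 2022

combined = alphabet_ad_revenue + meta_ad_revenue
share = combined / global_digital_ad_revenue
print(f"Combined: ${combined/1e9:.0f}B, or {share:.0%} of global digital ad revenue")
# -> Combined: $341B, or 60% of global digital ad revenue
```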
Just last April, U.S. District Judge Leonie Brinkema in Alexandria, Virginia, ruled that Google illegally monopolized two key markets in online advertising (publisher ad servers and ad exchanges), finding that the company willfully maintained monopoly power and harmed competition, publishers, and consumers. The ruling allows the U.S. Department of Justice (DOJ) to push for structural remedies, including a possible breakup of Google’s ad business, such as forcing the sale of Google Ad Manager. In a similar vein, tech giants convert their economic power into political power. Tech companies spend tens of millions of dollars annually on lobbying: Alphabet spent $11 million and Meta $19 million in 2022 alone to influence privacy laws, antitrust enforcement, and other regulations.
What’s at Stake?
Tim Wu, law professor at Columbia and author of The Attention Merchants, aptly observes, “We’re not paying for these services with money; we’re paying with our attention.” These algorithms are optimized not for truth or well-being, but for engagement—frequently achieved through outrage, anxiety, or sensationalism.
Platforms are not neutral conduits of information; they are engineered traps designed to monopolize our time and monetize our behavior through precision-targeted advertising. This structure creates direct incentives to amplify emotionally charged, polarizing, or misleading content, as these consistently outperform moderate, factual, or nuanced material. Ruwantissa Abeyratne’s essay, The Attention Economy and Commodifying the Human Mind, details how this model undermines the coherence of selfhood, writing that the “relentless pursuit of visibility engenders a state of perpetual self-surveillance,” where personal worth becomes contingent upon algorithmic validation rather than internal reflection. This erosion of authentic identity is not a cultural quirk but a cognitive and social consequence with negative implications for democratic societies.
Declining cognitive autonomy, defined as the capacity to consciously direct one’s own focus, poses a significant threat to psychological resilience, democratic deliberation, and the normative goals of liberal education. As Chris Hayes argues in The Sirens’ Call, sustained exposure to fragmented, emotionally charged stimuli degrades the brain’s ability to sustain deep thought, consolidate memories, and regulate emotions. This is not merely an individual health concern: it destabilizes the conditions necessary for informed citizenship, rational public discourse, and democratic participation. In this context, attention is no longer a private asset but a public good under threat.
These individual-level cognitive harms do not remain isolated. When aggregated across millions of users, they coalesce into collective outcomes: fragmented social trust, polarized communities, and weakened democratic norms. For example, a 2020 Pew Research Center survey found that 64% of Americans believed social media had a mostly negative effect on the direction of the country, primarily citing the amplification of divisiveness and decline in civil discourse as key concerns. This growing public apprehension is indicative of a broader recognition that platforms are not merely digital commons for communication, but influential shapers of public perception and behavior. The perception of harm extends beyond frustrations with incivility: it signals a loss of trust in the infrastructure of digital communication. When a majority of the public perceives social media as detrimental to national cohesion, it becomes increasingly difficult to foster collective deliberation, civic trust, and the preconditions for democratic consensus. In this sense, public concern operates not only as a reaction to personal digital experiences, but also as a barometer of deeper systemic dysfunction.
Let’s look at how political polarization has intensified alongside rising social media use. Research published in the Proceedings of the National Academy of Sciences found that affective polarization, defined as the extent to which people view opposing partisans with hostility, has nearly doubled in the U.S. since the mid-1990s, accelerating sharply in the era of algorithmic social media feeds. Studies also show that exposure to emotionally provocative political content increases partisan animosity, creating feedback loops that entrench ideological divides.
The consequences extend beyond rhetoric: the Center for Strategic and International Studies documented that the number of domestic terrorist attacks and plots in the United States increased from 67 incidents in 2017 to 110 in 2021, with experts attributing part of this surge to online radicalization and the echo chambers created by algorithmically curated content.
One concrete example of these dynamics in action is the January 6th, 2021, attack on the U.S. Capitol, where investigators found that social media platforms like Facebook and YouTube played central roles in spreading emotionally charged narratives of election fraud. These narratives were algorithmically amplified, contributing directly to mobilizing large crowds and inciting real-world violence.
A study published in Trends in Cognitive Sciences highlights that while the most polarized demographic in the U.S. is older adults (65+), who are less likely to use social media, this does not disprove the role of platforms in shaping polarization. Later experimental studies, such as the randomized Facebook deactivation experiment, found that even temporary disconnection from Facebook significantly reduced both issue-based and affective polarization among users, revealing a causal relationship between platform exposure and increased political hostility.
Algorithmic Weaponization of Emotions
A study published in the Proceedings of the National Academy of Sciences analyzed 3 million Facebook and Twitter posts and found that every additional use of outgroup-related language (e.g., “liberals” or “Republicans”) increased the likelihood of a post being shared by 67%. These posts also generated more “angry” reactions and retweets, illustrating how divisive rhetoric is rewarded within the attention economy. Similarly, a 2021 MIT study demonstrated that highly partisan or sensationalist political tweets were 70% more likely to be retweeted than neutral ones, showing how social media algorithms systematically favor inflammatory content that corrodes norms of deliberative democracy.
Consider a congressional hearing broadcast live on C-SPAN: a setting characterized by hours of testimony, deliberation, and procedural nuance. Within moments, a single 20-second outburst, perhaps a sharp retort or moment of confrontation, is clipped, stripped of context, paired with dramatic audio, and circulated on Facebook or TikTok. The algorithm, fine-tuned to prioritize emotional intensity over informational depth, propels it to virality. For millions, that snippet becomes the event’s defining narrative—and, effectively, their entire memory of it.
The democratic process is reduced from substantive debate to a curated flash of outrage. This illustrates not only how the attention economy distorts consumption, but how it reshapes collective memory, reframes public priorities, and redefines what is politically salient. Individual vulnerabilities to attention-hijacking content, when scaled across society, produce systemic risks: eroding trust in institutions, fostering hostility toward political opponents, and, ultimately, destabilizing democratic governance.
Power, Platforms, and the New Monopolies of Mind
What distinguishes these firms is not just their reach, but the opacity of their algorithmic decision-making. Engagement-maximizing systems determine what rises to prominence and what is suppressed based solely on predicted retention and ad profitability, disconnected from public interest, factual accuracy, or democratic values. The result is an asymmetrical information ecosystem in which a handful of corporate actors wield disproportionate influence over collective consciousness.
This concentration of attentional power amplifies polarization, misinformation, and tribalism. Algorithmic models consistently privilege outrage and sensationalism, as these generate longer watch times and more frequent clicks, directly driving ad revenue. Empirical research demonstrates that false news spreads six times faster than truthful news on Twitter, driven largely by the emotional content that its algorithms prioritize.
Tim Wu warns that existing antitrust frameworks are fundamentally ill-suited to this challenge. Traditional competition law presumes price as the key variable, but in the attention economy, attention itself is the commodity—and harms emerge through degraded agency, corrupted public discourse, and cognitive manipulation.
Without a reframing of regulatory paradigms, these platforms risk becoming monopolies of the mind: environments where citizens are no longer autonomous curators of their worldviews but passive subjects whose perceptions are shaped by unseen commercial imperatives. If cognitive autonomy is to remain central to democratic life, it must be protected against the unchecked concentration of attentional power.
The Ethical Dimension: Consent and Exploitation
Should corporations have the right to manipulate human attention without meaningful, informed consent? Legally, the question remains unresolved; ethically, the implications are grave. As scholar and legal theorist Ruwantissa Abeyratne argues, the current architecture of the attention economy exploits fundamental cognitive vulnerabilities such as impulsivity, curiosity, and social comparison, without adequate disclosure or alternatives.
This asymmetry of understanding constitutes a kind of digital asymmetrical warfare, where predictive algorithms shape the individual mind into a battlefield. The ethical violation lies not only in the harvesting of attention, but in the obfuscation of how and why it is harvested. What appears as “choice” in digital interfaces is often a carefully designed illusion reinforcing compulsive engagement rather than supporting autonomous decision-making.
What is urgently needed is a legal recognition of “attentional intrusion”: a conceptual analogue to physical trespass or data privacy violations. Just as the law prohibits unauthorized entry into homes or misuse of personal data, it must recognize the mind as a protected domain. The exploitation of mental space without consent represents a profound violation of personal sovereignty.
Reclaiming the Cognitive Commons
In an era when platforms are systematically engineered to optimize the capture and retention of user attention, the implications extend far beyond individual well-being; they threaten the very foundations of democratic society. What is at stake is not merely screen time, but the capacity for cognitive self-governance, reflective reasoning, and democratic agency. As individual cognitive harms, such as shortened attention spans, emotional volatility, and susceptibility to misinformation, aggregate across society, they erode shared understandings and social trust, fueling polarization and undermining democratic deliberation. The deterioration of individual cognitive autonomy thus becomes a collective crisis, destabilizing the institutions and norms on which democracy depends.
It is imperative, therefore, not only to reconceptualize attention as a public good, a shared cognitive resource essential to the functioning of liberal democracies, but also to move urgently toward regulatory frameworks that protect it from exploitative manipulation. This essay seeks to shift scholarly and policy discourse toward recognizing cognitive autonomy as a matter of urgent public interest. Like property, privacy, or freedom of expression, the right to mental self-direction warrants legal and institutional safeguards. The attention economy is not a distant threat on the technological horizon; it is a present and escalating reality. The longer we delay rigorous examination and decisive regulation, the more difficult it will become to reclaim the intellectual and democratic fabric already compromised.