NOTE: This blog post, published in February 2022, has been updated and expanded into an in-depth report on bridging-based ranking.
Algorithmic recommendation systems (also known as recommender systems and recommendation engines) are one of the primary ways that we navigate the deluge of information from products like YouTube, Facebook, Netflix, Amazon, and TikTok. We only have a finite amount of time and attention, and recommendation systems help allocate our attention across the zettabytes of data (trillions of gigabytes!) now produced each year.
The (simplistic) “evil recommendation system” story
Recommendation systems at the prominent tech companies stereotypically use what has come to be called “engagement-based ranking.” They aim to predict which content will lead a user to engage the most—e.g., by interacting with the content or spending more time in the product. This content is ranked higher and is the most likely to be shown to the user. The idea is that this will lead to more time using the company’s product, and thus ultimately more time viewing ads.
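To make the mechanics concrete, here is a minimal, hypothetical sketch (in Python, and not any platform’s actual code) of what engagement-based ranking boils down to: predict how much each candidate item will be engaged with, then show the highest-scoring items first. The Item fields and the toy scoring weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    author_id: str
    predicted_clicks: float      # e.g., output of a click/interaction model (hypothetical)
    predicted_dwell_secs: float  # e.g., predicted time spent on the item (hypothetical)

def predict_engagement(item: Item) -> float:
    # Toy engagement score: a weighted blend of predicted interactions and
    # predicted time-on-content. Real systems combine many more signals.
    return 1.0 * item.predicted_clicks + 0.01 * item.predicted_dwell_secs

def rank_by_engagement(candidates: list[Item]) -> list[Item]:
    # Highest predicted engagement is shown first.
    return sorted(candidates, key=predict_engagement, reverse=True)
```

Nothing in that loop asks whether a highly engaging item is sensationalist or divisive, which is exactly the problem described below.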
While this may be good for business, and is relatively easy to implement, it is likely to be a rather harmful approach—it turns out that this leads people to produce more and more sensationalist and divisive content, since that is what leads to the most engagement. This is potentially very dangerous for democratic stability—if things get too divisive, the social contract supporting a democracy can falter, potentially leading to internal warfare. (Caveat: for the sake of brevity, this is a heavily simplified account; there may be evidence that in some countries this is less of a problem, and many non-ads-based companies have similar incentives.)
Is the chronological feed a fix?
The perils of engagement-based ranking have led some advocates, policymakers, and even former tech employees to want to replace recommendation systems with chronological feeds: no more recommendations, just a list of posts in order by time. This appears to make sense at first glance. If recommendation systems place business interests over democratic stability, then it seems important to eliminate them before our democracy collapses!
However, this is where the story gets a bit more complicated. Chronological feeds address some of the problems with engagement-based ranking systems, but they cause many problems of their own. To understand why, we need to consider what recommendation systems do to society.
Recommendations determine “What is rewarded?”
At their core, recommendation systems decide what kinds of behavior a product will reward—and their impacts are the result of a very messy human world.
If we were all hyper-rational Spocks, perhaps we would only pay attention to factual things that truly mattered—and engagement-based ranking would be great at rewarding the best content. However, we have good evolutionary reasons to pay attention when, e.g., someone says that our families are in danger from a malevolent enemy—and so we messy humans can end up very engaged when we see sensationalism and divisiveness, regardless of its truthfulness.
So, sensationalism attracts attention. This drives engagement. Engagement-based ranking systems reward this with more attention. The resulting positive feedback loop provides a strong incentive for anyone desiring money, status, or power to produce sensationalist and divisive content. This includes politicians and journalists. Even worse, non-sensationalist content is far less likely to be seen, creating a very strong disincentive to produce grounded, nuanced, and fact-based content. All of this leads rapidly to a race to the bottom.
It is this combination of messy human psychology (what we pay attention to) and flawed societal incentives (our desire for attention and its rewards) that leads to harm—and engagement-based recommendations are just a particular way to increase the reward and thus the harm. (Foreshadowing: the “engagement-based” part is key here—as we will see, recommendations don't have to reward things that cause harm!)
Chronological feeds are a copout
Now we can return to why chronological feeds make some sense—and also why they don’t really make sense. On one hand, time-ordered feeds may help by returning us to the baseline incentives for sensationalism, without the extra amplification that engagement-based ranking adds.
On the other hand, if there existed a “divisiveness threshold”—a degree of divisive sensationalism that is simply too much for a democracy to survive—it is eminently plausible that one could get there without even needing engagement-based ranking to supercharge it. While it would be difficult to prove this, there are at least significant indications, e.g., in Kenya, India, and Brazil, that messaging systems (e.g. text messaging, WhatsApp, etc.) share many of the same problems as engagement-based social media—even though they use a chronological feed.
One of the reasons is likely that chronological feeds are essentially a simple form of recommendation system, ranked by recency—we can think of them as “recency-based recommendations.” Since recommendation systems determine “What is rewarded?” and recency is rewarded, chronological feeds will primarily reward those who post the most! This is something that even engagement-based ranking systems can help mitigate, by not showing too many things from the same person. So it’s not clear that switching to chronological feeds would lead to better outcomes (and there is evidence that at least in some cases, it may make things worse!).
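To illustrate the point that a chronological feed is still a ranking system, here is a hypothetical sketch in the same spirit as the one above: the same machinery, with recency as the score, plus the kind of per-author cap mentioned above as a mitigation. The field names and the cap value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_id: str
    created_at: float  # e.g., a Unix timestamp

def rank_chronologically(posts: list[Post]) -> list[Post]:
    # "Recency-based recommendation": newest first, so whoever posts the
    # most occupies the most slots near the top of the feed.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def cap_per_author(ranked: list[Post], max_per_author: int = 3) -> list[Post]:
    # The kind of mitigation mentioned above: keep at most N posts per
    # author so a single prolific account cannot dominate the feed.
    kept: list[Post] = []
    counts: dict[str, int] = {}
    for post in ranked:
        counts[post.author_id] = counts.get(post.author_id, 0) + 1
        if counts[post.author_id] <= max_per_author:
            kept.append(post)
    return kept
```

Even with such a cap, the only things being rewarded are recency and volume, which is the limitation discussed above.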
Finally, it’s worth noting that chronological feeds are the most basic possible "fix". Yes, they avoid one of the harmful feedback loops caused by engagement-based recommendations—but a movement to chronological feeds doesn’t even attempt to mitigate and overcome the underlying problems.
We can do better.
An alternative "for good": Bridging-based ranking
What would it look like to create a sort of opposite of an engagement-based ranking system: something that explicitly counters the “benefits” of sensationalism at capturing attention, and thus levels the initial playing field so that complexity and nuance have a fighting chance?
An example of what that might look like is a recommendation system that rewards content that bridges divides. Such a bridging-based ranking system might infer societal divisions, and reward content that decreases the extent of those divisions.
For example, imagine two potential articles about immigration that Twitter’s feed might show someone. One appears likely to increase divisions between opposing sides; the other is more likely to decrease them. Engagement-based ranking would not try to take this into account—it simply factors in how likely one is to engage or stay on the app, which is likely higher for a divisive piece that leaves users ranting and doom-scrolling. Bridging-based ranking would instead reward the article that helps the opposing sides understand each other—that bridges the divide.
Another way to think about this is that engagement-based systems can be thought of as a form of “centrifugal ranking”—dividing us into mutually distrustful groups; in contrast, bridging-based ranking is a kind of “centripetal ranking”—helping re-integrate trust across groups. If we can do it well, the implications are staggering. Bridging-based ranking would reward the ideas and policies that can help bridge divides in our everyday lives, beyond just online platforms. Moreover, it can help change the incentives for politicians—many of whom have been forced to play to the whims of engagement-based algorithms. It might even make journalism more profitable and more meaningful—no longer burying fact-based reporting under a sea of cheap sensationalism, and instead rewarding empathic and bridge-building journalistic practices. Finally, it may dramatically reduce the need for moderation. If division and hate are no longer rewarded with attention, there will be much less of it!
Of course, making bridging-based ranking possible and ubiquitous will not be easy. On the technical front, one of the biggest challenges to developing alternatives to engagement-based ranking systems is that many of the platforms and regions where the harms of recommendations are greatest are the same regions where software has limited capacity to understand the local languages. If bridging-based ranking required content analysis, that would limit its impact and potentially make it less effective for many formats such as audio and video. However, while challenging to implement, there are ways to adopt bridging-based ranking that are content-neutral. For example, like engagement-based ranking, it can rely primarily on modeling the patterns of interactions and sharing over time—just with a different purpose.
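As a rough illustration of what a content-neutral approach could look like (an assumption-laden sketch, not a specification of bridging-based ranking or any platform’s implementation): ignore the content itself entirely and look only at who responds positively to an item, rewarding items that draw positive interactions from both sides of an inferred divide. The group labels, which a real system would itself have to infer from interaction patterns, are taken as given here.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    user_id: str
    item_id: str
    positive: bool  # e.g., a like, share, or other approving response

def bridging_scores(
    interactions: list[Interaction],
    user_group: dict[str, str],  # user_id -> inferred group, "A" or "B" (assumed given)
) -> dict[str, float]:
    # Count positive interactions per item, split by inferred group.
    per_item: dict[str, dict[str, int]] = {}
    for ix in interactions:
        group = user_group.get(ix.user_id)
        if not ix.positive or group not in ("A", "B"):
            continue
        counts = per_item.setdefault(ix.item_id, {"A": 0, "B": 0})
        counts[group] += 1

    scores: dict[str, float] = {}
    for item_id, counts in per_item.items():
        # Reward cross-group approval: the smaller side's approval dominates
        # the score, so one-sided applause scores low even if it is loud.
        scores[item_id] = min(counts["A"], counts["B"]) + 0.1 * (counts["A"] + counts["B"])
    return scores
```

The key property is that nothing in such a score depends on the language or format of the content, only on the pattern of who responds to it, which is why a content-neutral approach can, in principle, extend to audio, video, and languages that software understands poorly.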
Bridging-based ranking, and this particular implementation approach, is just one way to ensure that recommendations keep us safely below the “divisiveness threshold” described above. There are currently a number of different approaches being explored to develop recommendation systems that, e.g., aim to reduce polarization. The underlying goal we must aim for—our north star—should be to ensure that ranking rewards content that “facilitates understanding, trust, and wise decision-making.”
Transitioning to ranking systems more aligned with democracy is a logical progression for a new technology, and especially for a new means of societal communication. In the past, we have developed other institutions and practices, such as the professionalization of journalism and the development of evidence-gathering practices, in order to mitigate sensationalism and support democracy. We can and must do it again in this new algorithmic arena.
Why “choose your own recommendation system” isn’t quite good enough
However, while it is heartening that recommendation systems could be made supportive of democracy, does that mean that platforms should deploy such systems? To some, the answer is an obvious yes; to others that might appear heavy-handed (though arguably, this is no different from the status quo).
Some academics and policymakers advocate that individual users should be able to—by law—choose their own recommendation systems. This could be implemented by allowing users to choose amongst a fixed set of options for their recommendations, including e.g. chronological, engagement-based, and so on. Alternatively, platforms could be required to let third parties do ranking on their behalf (the “middleware” approach advocated by Fukuyama et al.).
This is the kind of approach I would be excited about—I would love full control over everything I see. However, just because I (perhaps selfishly) want complete control over my environment does not mean that giving me or everyone else that control is sufficient to support democracy.
As the proponents of middleware themselves state:
Empowering each individual to tailor their algorithms might encourage a further splitting of the American polity, allowing groups to more easily find voices that echo their own views, sources that confirm their factual beliefs, and political leaders that amplify their own fears.
They go on to argue that this “is outweighed by the dangers of concentrated platform power” but this is a false dichotomy. You can both devolve platform power and prevent societal division (the latter should be one of the primary aims of any proposal to support democracy in the face of platform impacts). As I describe in “Towards Platform Democracy: Policymaking Beyond Corporate CEOs and Partisan Pressure”:
Many purported fixes to platform problems involve giving individual users more options; e.g. if you want to see less sensationalist and divisive content, a platform might let you tweak your personal recommendations. But this individual agency “solution” does not solve the collective problems that the sensationalism and divisiveness might cause for a community, nation, or the planet—and it could even make those problems worse.
As a concrete example, we probably won’t change engagement-based ranking’s problematic incentives for journalists and politicians by adopting a “choose your own ranking system” approach (unless the options are all just variants on something like bridging-based ranking). As a result, I argue that alternative “platform democracy” approaches can be used to determine collectively what recommendation systems should reward—in ways that protect against autocratic abuse.
This may be particularly important because moving beyond engagement-based ranking might also negatively impact some platforms’ business metrics (at least in the short run)—especially if they are competing with companies that are sticking with sensationalism. While government or shareholder pressure may be necessary, thankfully, a content-neutral approach is more likely to get buy-in from platform leadership, as it could reduce the intense pressure from governments around moderation issues (not to mention potentially dramatically reducing moderation costs).
Next steps
It would be folly to simply give up on recommendation systems. Instead, they must be harnessed to nurture an attention economy that is not tilted to reward sensationalists. We must at least level the playing field so that bridge-building has a fighting chance against divisiveness—and neither chronological feeds nor a “choose your own ranking system” are likely to get us there.
We need more rapid investment by both the public and private sectors in order to develop and scale viable approaches to bridging-based ranking as quickly as possible. We may also benefit from policy that explicitly seeks to encourage or enforce adoption. Finally, governance processes like platform democracy may be invaluable for giving the ultimate choice of ranking approach to the collective population being impacted and ensuring that such systems are not co-opted by those in power.
At the end of the day, we become what we reward. Let's reward effective bridge-building over engaging sensationalism and chronological spam.
***
As noted above, this blog post, published in February 2022, has been updated and expanded into an in-depth report on bridging-based ranking.
Reach out for thoughts, feedback, and potential collaboration through the email listed at aviv.me. Keep in the loop via the newsletter and via Twitter.
Ovadya, Aviv. “Can Algorithmic Recommendation Systems Be Good For Democracy? (Yes! & Chronological Feeds May Be Bad).” February 4, 2022.