Does Facebook Cause Polarization?

A slurry of recent articles asserts that Facebook increases polarization, and perhaps radicalization, among its users. Facebook is circling the wagons to counter this narrative, even asking its employees to help push back. After all, if they claim to be the company that brings people together, it would be unfortunate to be widely seen as having the opposite effect.

The first step to understanding how Facebook may or may not polarize us is to look at how recommendation engines work. The following is more or less true of Twitter, YouTube (Google), and basically any ad-driven network that makes more money by keeping us actively engaged.

If a site wants to better monopolize our time and attention, it will measure what we click and how well that content engages us. Algorithms then try to guess what we might want based on our recorded patterns, which form a map of our likes and dislikes. When other people, even total strangers, match our patterns and like something new to us, that item is more likely to be recommended to us, on the chance we’ll like it too. The more successful the algorithm is, the more prominently its picks are featured, all optimizing for one key quality: our attention.

This is not the only way to build a recommendation engine, but it’s typical. I hope it’s easy to see why it would, on the whole, be effective.
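To make that pattern-matching concrete, here’s a minimal sketch in Python. The interaction matrix, the users and the `recommend` helper are all invented for illustration; a production system would use vastly larger data and learned models, but the “people whose clicks match yours liked this” core is the same.

```python
# A toy "people like you also liked..." recommender. All data here is
# hypothetical; real systems learn from billions of interactions.
import numpy as np

# Rows are users, columns are items; 1 means the user engaged with the
# item (clicked, liked, watched), 0 means no recorded interaction.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1 -- overlaps heavily with user 0
    [0, 0, 1, 1, 0],   # user 2 -- different tastes
])

def recommend(user: int, k: int = 2) -> list[int]:
    """Top-k unseen items, scored by the similarity-weighted votes
    of every other user whose history matches this user's."""
    me = interactions[user]
    # Cosine similarity between this user's history and everyone else's.
    norms = np.linalg.norm(interactions, axis=1) * np.linalg.norm(me)
    sims = interactions @ me / np.where(norms == 0, 1, norms)
    sims[user] = 0  # a user shouldn't vote for their own feed
    # Each item's score is the similarity-weighted sum of others' clicks;
    # items this user has already seen are masked out.
    scores = sims @ interactions
    scores[me > 0] = -np.inf
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # [2, 3]: user 1's matching history promotes item 2
```

Notice that nothing in this loop asks whether item 2 is true, healthy or fair; the only signal is that people with matching patterns engaged with it.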

Then there’s the emotional component to consider. Content that pushes our emotional buttons is much more likely to motivate us to recommend it to others who follow us.

Someone may become understandably incensed by stories on abortion, crime, healthcare, cancel-culture, gaming, human rights, sex, the military, the economy, racism, or just plain hateful language, to name a few.

Is one political philosophy more polarizing than the others?

Attempting to answer this directly may ironically add to any polarization and make it harder to bridge the divide, especially if each side is certain the others are extreme while it alone is being reasonable. So let’s try something else.

Consider that if we all understood reality perfectly, and reality is indeed consistent for all, then we’d all agree on ground truth. That seems like a simple enough truth to hold. But it could still be undermined by anyone with intent to mislead people or otherwise derail reasonable discourse. If there is a difference among the political persuasions, it may show up on that front first.

I will at least validate the intuition that the stories, posts and videos that engage us most have the greatest inclination towards extremism.

How do we know this is true?

Long-standing social and political issues are inevitably complicated and nuanced, even where we’re sure of the right answers. Other people will have their own answers and believe them just as strongly, even if one or more sides are likely to be wrong.

Better-authored stories and videos tend to acknowledge facts from multiple sides, not for the sake of false equivalencies, but as a part of listening to what real people experience and know. They ask key questions, like: “How do we know this is true?” And they attempt to answer those questions with more facts and experts.

This approach results in more thoughtful and less extreme narratives. It’s what good journalists are trained to do (and what less diligent ones feign with their ‘whataboutisms,’ on the false assumption that if everyone is equally wrong then no one can be called out for it).

Alas, these well-written stories, posts and videos tend not to be the ones we keep clicking on all day. They take more effort to process, and the resulting calls to action are often less clear. In other words, they do less for the immediate emotional feedback loop than the stories where someone obviously needs to STFU now or go to prison. Simpler messages, like “They cancelled Dr. Seuss!”, do better at engaging us. (Fox News lacks the immediate feedback of Facebook’s “like” button, but they do have lots of data on their audience…)

Facebook’s algorithms will automatically favor the latter, more enraging kind of stories, because that’s where the data shows we are most engaged. The more engaging the stories, the more we will click and the longer we remain on the site, which is the real goal.
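As a toy illustration of that bias (the stories, numbers and field names below are all made up): when the ranking objective is predicted engagement alone, nothing in the score rewards nuance or accuracy, so the enraging headline wins by construction.

```python
# Rank a feed purely by predicted attention. Every number is invented;
# the point is the objective, not the values.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_click_rate: float  # learned from our past clicks
    predicted_dwell_time: float  # expected seconds we'll linger

def rank_feed(stories: list[Story]) -> list[Story]:
    # The only rewarded quality is attention: click probability times
    # time on site. Accuracy and nuance don't appear in the score.
    return sorted(stories,
                  key=lambda s: s.predicted_click_rate * s.predicted_dwell_time,
                  reverse=True)

feed = rank_feed([
    Story("Nuanced 4,000-word policy analysis", 0.02, 180.0),  # score 3.6
    Story("They cancelled Dr. Seuss!",          0.30,  45.0),  # score 13.5
])
print([s.title for s in feed])  # the outrage headline ranks first
```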

Consider our “doom-scrolling” throughout most of 2020.

It turns out we’re even more susceptible to advertising persuasion while we’re in a hyper-vigilant semi-paralytic mental state, even if our outward mood may be rather depressed. (The small endorphin rush from receiving welcome packages during our stay-at-home period has sustained many businesses, but mostly Amazon…)

The net result is that we will see more of whatever gets us most emotionally charged. And if that happens to be conspiracy theories about government or elections, then one can easily see how it begins with some well-connected criminals being let off the hook (all too common) and ends with bizarre ideas of inhuman politicians eating babies under pizza joints (false).

Companies like Facebook obsess over their collected data. They do everything possible to maximize our attractiveness to their paying customers, the companies bidding to place ads on the network.
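For a sense of what that bidding looks like, here is a bare-bones sketch of a second-price auction, a mechanism long used to sell ad impressions (the winner pays the runner-up’s bid). The advertisers and bid amounts are invented.

```python
# One ad impression, auctioned second-price style. Bids are hypothetical.
def run_ad_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winning advertiser, price paid) for a single impression."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the second-highest bid, not their own.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

print(run_ad_auction({"brand_a": 2.50, "brand_b": 1.75, "brand_c": 0.90}))
# -> ('brand_a', 1.75). The more attentive the audience, the higher the bids.
```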

It’s at this point that we pause to point out that the core problem is not with Facebook’s rank-and-file employees, their motivations, ethics or morals.

The algorithms must optimize for our attention because that’s how the company currently makes money. So the core problem is with the business model that thrives on monopolizing our attention. If I fault Facebook for anything, it’s over their unwillingness at various levels to confront (or even admit) this central truth.

If any company does well at this business model, the side-effects come with the territory. They can voluntarily restrain their algorithms over ethical concerns, but the negatives will creep back in from somewhere else. Investors expect profits to rise each year. Advertisers want to see better returns on their investment. So something somewhere has to ratchet up our attentiveness and attractiveness for the real paying customers, and this has some anti-social side-effects.

Consider other businesses that don’t (as of today) try to monetize and monopolize our attention:

Imagine a fitness club franchise that was supported through advertising alone. To maximize ad revenue, they’d want us to remain engaged, no matter how exhausted or busy we might be. This push would naturally extend to our lives outside the gym, so they could serve more ads and get us back in the gym as soon as possible.

In reality, with the cost of exercise machines and accessible floor space, only a fraction of the membership can practically be present at any given time. The gym doesn’t want us all to show up at once, or it would fail with poor quality of service, long wait times and low overall satisfaction. But you can imagine such a service run virtually, using VR devices you’d own…

Now imagine a supermarket where the products were paid for entirely through advertising (note: money already flows from marketing budgets to buy advantageous placement on store shelves and end-caps). Such a grocery would not want to see us loading up on the best or most expensive products and heading out the door. They’d more likely try to keep us on-site longer, consuming cheaper-to-source items for as long as possible (see coffee shops inside grocery stores for a prototype).

In practice, such a supermarket would be like the world’s worst all-inclusive vacation club, without the pool. And of course, it would also not survive as long as the products had natural cost and scarcity. In XR, perhaps…

So now imagine a future pair of “smart” glasses that were paid for (or at least heavily subsidized) by advertising. The vendor would want us wearing the device all day, collecting data about our intentions and reactions in the real world and using that data to influence ad placements and future buying decisions. The way they’d make money is by auctioning off our time and attention. It’s not unlike what Facebook does today, but here the data collection is even more intimate and the device’s presence on our faces guarantees our attention.

Facebook survives and even thrives because their model scales, serving all customers web content 24x7. Television survived on the advertising model because its costs also scaled, if not as well as cable’s. Newspapers are moving to subscriptions to survive because their costs and ad revenues don’t scale.

So back to the original question, does Facebook cause polarization?

Everything we see says: “yes, it certainly does.”

Does Facebook care?

I believe they really do. I believe they want their business model to have no unfortunate side effects, though it does.

But will they change?

It’s unlikely, with the current ad-driven business model, that they can change much, which is why they don’t openly consider it. They can only slow down the worst parts or try to compensate in other ways that don’t hurt the bottom line. They will try that until they fail.

Now, if Facebook moved to subscription-only, it would solve many of these problems rapidly (not the content-rating problem, though other online communities already self-regulate better).

But how much would you pay for that?
