Echo Chambers and the Nature of Online Information

by Rowan Gledhill

The Society Library
16 min read · Nov 29, 2019
Artwork by KEILANA HOFFSTETTER

Netflix recently released a new documentary about Cambridge Analytica called The Great Hack. The film primarily follows Brittany Kaiser, a former director of business development at Cambridge Analytica, as the company’s global impact comes to light, and it raises numerous important questions about the nature of information online. How do we know if what we’re reading is true? How do we know who created it?

Without recognizing the potential for misinformation and malicious interference campaigns, we risk ceding our public forums to companies, political parties, and foreign adversaries. Cambridge Analytica’s influence is only one example of the various ways private groups can manipulate information and secretly control what we see and focus on.

Worse still, due to the prevalence of echo chambers on social media, our resultant vulnerability to influence campaigns, and the heavy partisanship in US society, we face a broad threat against our privacy and ability to think freely. What are the factors working against us and how do we combat them?

Echo chambers are created when an individual follows or “friends”, purposefully or otherwise, almost exclusively like-minded people. Thus, when said individual checks their page/stream, they see more of the political opinions they already agree with. Though such an environment can be very productive for organizing movements among pre-existing bases, there may be little to no interplay with other views or the people who hold them.

Though echo chambers are created in part by our own self-selection (for one reason or another, we tend to want to talk to those who already agree with us), they’re exacerbated by the suggestion and sorting algorithms of the social media outlet in question. Those algorithms are designed to increase use of the platform, which in most cases means showing “similar posts” or like-minded accounts. Troublingly, these algorithms give more weight to whatever provokes the most responses, not to what is true or most topical, which in turn encourages and rewards ‘trolling’ (saying something with the sole purpose of irritating or offending someone or some group), since those posts tend to get the most responses.
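
To make that incentive concrete, here is a minimal sketch in Python of an engagement-weighted ranking of the kind described above. The weights and field names are invented for illustration and do not reflect any platform’s actual formula; the point is simply that nothing in the score rewards accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: responses (comments) count most, shares next,
    # likes least. Note that truthfulness is not a factor at all.
    return 1.0 * post.likes + 3.0 * post.shares + 5.0 * post.comments

posts = [
    Post("Measured, accurate policy summary", likes=120, shares=10, comments=8),
    Post("Inflammatory troll post", likes=40, shares=60, comments=300),
]

# Ranking by engagement alone puts the troll post on top.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.text}")
```

Any feed ranked this way will, by construction, surface whatever provokes the most responses.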

Echo chambers can be beneficial for mobilization purposes, like organizing protests or creating activism groups, but in a grander sense, these environments push people towards more extreme political beliefs. If someone spends all their time reading view-affirming articles and posts, they may begin to lose touch with what the rest of the population believes and sees.

This trend has, in part, also given rise to the term “Fake News” being used to describe articles or news sources that present opposing views. While genuinely fake news (claims that are false or deliberately misleading) has always been an issue in information sharing, in our current political environment “fake news” is often used to discredit opposing views, not just that which is false.

Likewise, without being introduced to other points of view, an individual’s beliefs may never be challenged. In debate, it’s important to be open to other ideas and to have some level of self-reflection and correction, but if one’s beliefs are never challenged, one may come to believe they’re 100% correct about all of them (a rarity for virtually anyone). Even when confronted with information from outside their echo chamber, users may dismiss the opposing beliefs as “fake news,” a label that can effectively discredit any article one doesn’t believe, even if it’s factually accurate.

It’s good to question our views; that’s how we refine our opinions. But in an online community where any mistake is catalogued and saved as potential proof of an individual’s incompetence, being wrong threatens a person’s standing in the group and their ability to participate in the future. Rather than promoting an online community set on revealing truth or debating policy, such retributive action encourages an environment where users parrot the same few talking points and attack those who don’t agree. The communities then become more isolated and more extreme, as all other viewpoints are pushed out.

In response to this so-called ‘gate-keeping’ in social media, there are also platforms that set out to create a sort of “safe space” for a specific community. For example, the Intellectual Dark Web (I.D.W.), a term coined by Eric Weinstein, is a community of people who feel, rightfully or not, that they have been pushed out of mainstream outlets for their counter-cultural views, often citing ‘political correctness’ as a barrier blocking free speech and the free exchange of ideas. Some notable individuals who consider themselves part of the I.D.W. are Ben Shapiro, Jordan Peterson, and Sam Harris.

Individuals in the I.D.W. tend to perceive themselves as ostracized or censured for their views, though it’s worth noting that many of them rose to prominence only after being driven from their original media outlets or communities, and it could be argued that their current popularity is mostly due to their “controversial” views rather than in spite of them.

Though the primary goal of the I.D.W. is to offer a space for the free exchange of ideas and a return to ‘truth’, a fair portion of its following is brought together by a shared hatred of the left or a disdain for ‘multiculturalism’, which tends only to increase pre-existing partisanship rather than create an independent, free-thinking third party.

Artwork by KEILANA HOFFSTETTER; Summer 2019 Society Library intern

What’s more, since the I.D.W. is focused on removing barriers on what can and cannot be said, it often embraces individuals arguably known for their conspiratorial or downright false beliefs, like Alex Jones or Milo Yiannopoulos. Thus, to some, the group is known primarily for inflammatory and controversial beliefs, and not necessarily for its objectivity or truthfulness. Despite an initially admirable goal, this sort of isolated community succeeds only in sequestering itself further from the rest of the public, creating the same issues and vulnerabilities as those found in echo chambers.

Unfortunately, echo chambers have set the tone for online conversation. If two people with opposing views interact with one another, it’s usually not an emblem of productive debate — productive debate, in this sense, meaning a discussion which results in a greater understanding of the other side’s views or the discovery/creation of common ground. On the contrary, internet arguments tend to produce more animosity and partisanship, rather than camaraderie or understanding.

Since each party has probably been engaging primarily with users within their own echo chamber, they will most likely have to begin the argument at the very base of their belief system and work their way up to whatever is actually in question. This poses many issues, not least of which is that a user with opposing views can attack the basis of the individual’s beliefs rather than the core issue being debated. Or said individual may misunderstand which of their opponent’s views are most important or valuable, and therefore call into question aspects of the opponent’s argument that do little to further the discussion. Similarly, if one can’t come up with a rebuttal, one may resort to attacking the other person directly or picking apart inconsequential aspects of their argument.

Worse still, with the amount of allegedly true (though often contradictory) information circulating online, someone may have sources for their views that others don’t trust. The distrust of media outlets and the anxiety over false news are not unfounded worries.

False news (a claim or claims that are fundamentally incorrect) travels faster and reaches more people than real news. And there really are domestic and international hackers and trolls who purposely sow incorrect or misrepresented information, often operating undercover within echo chambers. The existence of echo chambers also makes it easier for these hackers and trolls, some of whom are puppets of government or corporate campaigns, to sow disinformation (intentional misinformation) in a way that speaks to a specific ideological demographic.

Among the various issues that echo chambers create in online communities, one of the more troubling is their susceptibility to targeted campaigns, like the recent Russian interference campaign run by the Internet Research Agency LLC (the IRA).

The IRA was indicted by the Department of Justice for interfering with various electoral campaigns between 2014 and 2016. By fabricating American identities or impersonating existing Americans, the IRA created accounts that blended into pre-existing echo chambers and political bases. Though the conservative IRA accounts tend to get more attention (since Trump won the election), it’s important to note that the campaign also targeted liberal communities.

These fake accounts sowed misinformation and organized opposing offline political marches, both in support of and against President Trump’s election to office. Their primary goal may not have been specifically to get Trump elected — though that has been suggested, especially given their posts encouraging minorities to vote third party — but rather to promote distrust of the American political system and existing candidates. The IRA stoked the partisanship that already existed in US society, encouraging greater polarization and creating more political discord.

The Internet Research Agency’s tactic was successful, in part, because of the pre-existing echo chambers of political communities. Online communities where individuals predictably share and support one another’s views are distinctly susceptible to targeted articles and posts that twist partially correct information into partisan propaganda. This is partially because, inside these echo chambers, such misinformation becomes harder to spot since it’s cloaked in the language typical of the community. In other words, it just seems like another like-minded user rather than a suspiciously biased, extreme viewpoint.

As extreme views are voiced within online communities, whether by authentic or fraudulent users, they become normalized. What would at one point have seemed extreme now seems typical to those entrenched in the echo chamber. Just as we can become desensitized to violence or nudity, we can also become desensitized to extreme views when they reflect our own beliefs. The bias and tilt of posts and users begins to fade into the background until it seems as though everyone is sharing obvious facts about US society, rather than a specific take on a nuanced issue.

This sort of distorting environment also makes it seem as though more people agree with extreme views than actually do. In turn, since echo chambers exist throughout the political spectrum, when a user who’s entrenched in their own community ventures out and engages with someone from a different community, it can seem as if the two users inhabit entirely different worlds. Often each has a different set of facts to pull from, as well as extreme opinions they may assume are common sense, a combination that tends to produce angry, usually condescending debates.

Around half of the social media users contacted for a Pew Research study found that online debates tended to be angrier, less respectful, and less civil than those in real life (49%, 53%, and 49% respectively). Similarly, 64% of users reported that their online interactions left them feeling like they had less in common with the opposite side of the political spectrum than they’d previously thought. Both users leave such a debate feeling more sure of their own views, and more sure that the opposing ideology is absurd and harmful. To one another, each user seems out of touch with reality and with the US public as a whole.

This may, in part, be due to the ‘backfire effect,’ which occurs when partisan individuals are exposed to opposing views. A 2018 study testing whether exposure to opposing political views and posts on Twitter would decrease polarization found that when conservatives followed liberal bots, they became substantially more conservative.

Though the study had various limitations (it did not include independents in its subject pool, it only looked at Twitter, and the subjects were offered financial compensation to read view-challenging articles), it does suggest there is more to learn about exposing partisan people to opposing views. Namely, simply reading view-opposing articles does not in and of itself decrease polarization, and may in fact backfire and increase partisanship, particularly among specific demographics.

The study notes that the increased conservatism of its Republican subjects may not specifically result from view-opposing exposure, but from other aspects of the study, like liberal bots retweeting more women and people of color than the conservatives were used to. The liberal and conservative bots also retweeted solely “high-profile elites,” and thus the increased conservatism may have resulted from an anti-elite bias.

Importantly, though, this study reveals the complex nature of exposing people to opposing political views. Specifically, that exposing people to highly partisan opposing views may only increase their partisanship and affinity for their own party, particularly among conservatives.

So if remaining in an echo chamber is problematic, and exposing oneself to opposing views may only create more partisanship, how can we bridge the political divide? Well, firstly, it’s important to note that exposure to different views does not always result in greater partisanship. In fact, to quote UPenn sociologist Damon Centola, “It’s not that communication causes polarization. It’s that communication in a highly polarized context increases polarization.”

Right now, most social media outlets are already highly polarized. If faced with an opposing view that doesn’t fit within one’s pre-existing worldview, it’s easy to go to the opposing view-holder’s page, see their banner and bio (which typically reveal the political ideology of the user quite clearly), and discount their view on the basis of their political identity alone.

Though it’s possible that an individual may share a truly absurd view based on incorrect information, most of the time the posts made by opposing users are at least partially true, and only seem false to those who disagree because of the bias those posts carry. If one has become accustomed to reading articles and posts whose bias supports one’s own beliefs, one probably doesn’t perceive the bias, seeing it instead as truth.

However, when faced with an article with an opposing bias, the individual will probably notice the bias immediately, leading them to discount the article as a whole, rather than just its conclusion or opinionated aspects. Despite those articles not actually being false, they may be labeled as “Fake News,” since the reader perceives them as heavily biased compared to view-affirming ones.

One meta-analysis examining various studies on partisan bias found that, when subjects are given identical information with differing conclusions, they tend to evaluate the information that affirms their own beliefs more favorably. The meta-analysis, which set out to find whether there was equal bias on either side of the political aisle, did indeed find that both liberals and conservatives show substantial bias when reviewing information, and that the bias is roughly equal in strength regardless of political party.

This is not to say, however, that any article that posits something a user doesn’t agree with will be discounted or will encourage greater partisanship. In fact, people tend to spend more time reading articles that challenge their views than they do reading those which affirm them. This may be, in part, because people do want to remain aware of other political views, so long as their own are also supported through various other sources and articles.

However, when the language used to describe the opinion-challenging view is clearly political, a user may discount it completely. This pattern is shown well in the 2018 study “Social Learning and Partisan Bias in the Interpretation of Climate Trends.”

In the study, online users were separated into small networks and shown a climate change graph from NASA, then asked to estimate what the next data point on the graph would be. After estimating the first time, the groups estimated again under one of three conditions: they were shown the average of the other participants’ estimates in their network; they were shown the other estimates with the usernames and political identities of the other participants revealed; or, lastly, they were shown the average estimate of the network while the political parties’ logos (an elephant and a donkey) were displayed below.

The study found that, for example, conservatives were more likely to initially misinterpret the data on their first guess, but if they were in the first group (where only the network’s average estimate was shown on screen), they experienced strong social learning effects, improving the accuracy of their estimates significantly and eliminating partisan bias in the data’s interpretation. Interestingly, when the usernames and political identities of the participants were revealed, there was still an increase in estimation accuracy, but less social learning. In the third case, where the political logos were shown, social learning was prevented entirely, and the baseline polarization of the group remained.
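
As a rough illustration of that first condition, here is a toy simulation in Python. It is a simplified sketch under invented assumptions, not the study’s actual model: one half of a network starts with a biased reading of the data, and everyone either revises toward the network average (social learning) or stands pat (as under political priming).

```python
import random

def simulate(learn_from_network: bool):
    """Toy model: 20 'biased' and 20 'unbiased' estimators of a true value.
    With social learning, everyone revises halfway toward the network
    average each round, shrinking both the partisan gap and the biased
    group's error. Without updating (as under priming), both persist."""
    random.seed(0)
    true_value = 10.0
    biased = [true_value - 3.0 + random.gauss(0, 1.0) for _ in range(20)]
    unbiased = [true_value + random.gauss(0, 1.0) for _ in range(20)]
    if learn_from_network:
        for _ in range(5):
            avg = (sum(biased) + sum(unbiased)) / 40
            biased = [0.5 * e + 0.5 * avg for e in biased]
            unbiased = [0.5 * e + 0.5 * avg for e in unbiased]
    gap = abs(sum(biased) / 20 - sum(unbiased) / 20)
    biased_error = abs(sum(biased) / 20 - true_value)
    return round(gap, 2), round(biased_error, 2)

print("social learning (gap, biased-group error):", simulate(True))
print("primed, no updating:                      ", simulate(False))
```

In this toy version the biased group only converges toward the group mean; the study’s stronger finding, that accuracy improved overall, also depends on participants re-reading the data itself. The sketch merely illustrates the direction of the effect.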

What does this study tell us? In part, it shows that we are capable of learning from one another and adjusting our views and expectations — that partisanship has not destroyed our ability to see reason.

On the other hand, this study reveals the insidious nature of political priming. Just by placing the political logos on the screen, participants were less likely to learn from one another or change their estimates. Essentially, in a clearly politicized environment, we stop listening to one another in favor of staying within our political community, even when one side’s interpretation is demonstrably more accurate.

Beyond echo chambers in social media, the disconnect between ideological camps also stems from increasingly partisan mainstream media sources. A Pew Research study, albeit one from 2014, found that consistent liberals and consistent conservatives trust different sources for their news.

Consistent conservatives tend to trust Fox News above all other sources (about 47%), and, while consistent liberals do not have one source they trust significantly more than others, they tend to trust CNN, PBS, and BBC. Thus, when viewing mainstream news from a preferred source, individuals could be seeing the same kind of view-affirming material as they do on social media. Worse still, these news outlets are aware of their target audiences, and purposely angle their news to their demographic in order to remain relevant and favorable.

Importantly, this study also found that 39% of consistent conservatives and 30% of consistent liberals tend to drive political discussion (i.e., they discuss politics frequently and others come to them to hear about the news and their views), while only 12% of those with mixed ideological views hold the same role. This means that the consistently partisan members of our society, who tend to view or read media from view-affirming sources, lead the political discussions we have, thereby sculpting our conversations to reflect already deeply partisan beliefs.

As noted in the 2018 study “Social Learning and Partisan Bias in the Interpretation of Climate Trends,” political priming can eliminate social learning and stop us from re-examining our assumptions and views. Thus, by having consistently partisan members of society lead our political discourse, we’re at risk for starting each conversation already in our respective corners, ready to defend and reinforce our political party rather than broach each topic with an open mind.

As political ideology becomes closer to a social identity, as algorithm-driven and self-imposed echo chambers keep us from seeing view-challenging content, and as exposure to opposing views perpetuates existing partisanship, we’ve entered an age of intense polarization. It’s important to understand the factors working against us, and to recognize them when they appear. If something seems suspiciously one-sided or too sensationalized to be true, it might be wrong. Check the source. See if any other outlets have released corroborating stories.

Recognize that most people aren’t jumping to absurd conclusions, but rather hold views that reflect the content they’ve been consuming, the news they watch, and how active they are in political echo chambers. If you were seeing what they see, you might feel the same way.

Artwork by KEILANA HOFFSTETTER, Summer 2019 Society Library intern

As The Great Hack makes clear, we should demand privacy and security for our personal information. It’s time to start skimming the Terms and Conditions, most importantly any passage marked Privacy.

It’s entirely within our power to combat the partisanship in our society and utilize our combined knowledge and understanding to make the world a better place. Escaping from our echo chambers and exposing ourselves to opposing views may not be easy, but by consciously consuming content and thinking critically about what we believe, we can be a better-informed, more powerful public.

So how do we do that? It’ll be time-consuming to become more critical information consumers, but we are working to make this easier for the average American — at least on persistent issues of great impact.

We, the Society Library, work to archive as many narratives as possible, contextualize them in an apolitical way, and label them as they are: suggested fact, fallacy, opinion, etc. We utilize big data analysis, content analysis, and strategic intelligence gathering methods to extract arguments, claims, and evidence from over 12 forms of media (gov docs, videos, books, social media, etc.) to form those narratives and publish them for public scrutiny and use.

Rather than focus on changing what people think, we seek to change the context in which people think. Unlike an isolated echo chamber, the Society Library amasses a huge amount of content, subject by subject, and rigorously vets the sources of information, asking the questions: where did this narrative originate from, what does it mean, what are its components, etc.

Unlike a lot of other platforms, the Society Library is inclusive of all ideas, however radical or illogical, for the sake of uncensored representation. Rather than limit what can be said, the Society Library labels ideas as they are (logically fallacious, opinion-based, evidence-driven, etc.) and allows users to come to their own well-informed decision. Users need to see the whole picture, not just the facts, in order to understand complex topics for what they are.
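
As a purely hypothetical illustration of what such a labeled record might look like (the Society Library’s actual schema and tooling are not described in this article), consider a minimal sketch in Python, with a label set loosely echoing the categories named above:

```python
from dataclasses import dataclass, field
from enum import Enum

class Label(Enum):
    # Illustrative labels only, not the Library's real taxonomy.
    SUGGESTED_FACT = "suggested fact"
    OPINION = "opinion-based"
    FALLACY = "logically fallacious"
    EVIDENCE_DRIVEN = "evidence-driven"

@dataclass
class Claim:
    # Hypothetical fields: the statement, where it came from, and its labels.
    text: str
    source: str
    media_type: str
    labels: list = field(default_factory=list)

claim = Claim(
    text="Global temperatures have risen over the past century.",
    source="https://example.gov/climate-report",  # hypothetical source
    media_type="gov doc",
    labels=[Label.SUGGESTED_FACT, Label.EVIDENCE_DRIVEN],
)
print(claim.text, "->", [label.value for label in claim.labels])
```

The design choice the article describes is visible even in this sketch: the claim itself is preserved verbatim and the labels sit alongside it, so readers can judge the idea with its context rather than having it filtered out.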

This year, we’re focusing on creating a library of narratives concerning the American debate about climate change, a debate as scientific and technical as it is inflammatory and fallacious. There are many climate change echo chambers (one audience segmentation study indicates there are about six major echo chambers about ‘global warming’ in the United States), and our work is about excavating those chambers for gems and “fool’s gold” alike, and bringing those ideas together as one collection. Only then can we engage each chamber’s audience and invite them to see things from their fellow Americans’ point of view.

If you’d like to learn more about this non-profit work, we encourage you to do your research and check us out at SocietyLibrary.org and GreatAmericanDebate.org.

by Rowan Gledhill, a Society Library Media intern, Summer 2019

With a few edits and additions by the Society Library staff.

Art by Keilana Hoffstetter (find her @keilana_art on Instagram)
