Combatting Misinformation through Emerging Tech

Hannah Towey
4 min read · Mar 22, 2021

In 2016, the Oxford Dictionaries Word of the Year was “post-truth,” referring to the phenomenon in which “objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Over the next four years, the term “fake news” would skyrocket in political speech and media use; a quick Google News search of the term now returns about 8 million results. Personally, I’m not a fan of the term. Why? Because it implies a sense of obviousness. Fake or real, fiction or fact: the line separating truth from misinformation is rarely so clear-cut.

Right around the midterm elections, The Knight Foundation commissioned a report investigating seven different ways misinformation spread during the 2016 presidential election, shedding light on the ecosystem within which “fake news” is created and shared. The fundamental question guiding this study is crucial in understanding how emerging technologies can combat misinformation:

“We know that the problem is the product of both technology and human behavior. It is people who generate efforts to dress up inaccurate and misleading information as credible ‘news.’ But it is technology that weaponizes these malevolent (or misguided) impulses at a larger scale. So, if this is a human problem and a technological one, what about the solution? Can technology address our misinformation problem, or does it come down to people and what they do or don’t do?”

I, unfortunately, do not have an answer to the above questions. But many tech and media companies have started to tackle solutions while, so far, the government has not. To better understand possible solutions to the problem of misinformation, I explored two categories of emerging tech initiatives that attempt to make the internet a more trustworthy place.

The first focuses on visual misinformation. Think: manipulated maps and images that went viral during last year’s devastating Australian wildfires. The New York Times, IBM, Adobe and Twitter have partnered on two projects that prototype a “new digital standard” for visual media like photos and videos. The proposed product allows key information, such as who took a photo and when and where it was taken, to travel within the media itself. That way, viewers can see if and how a photo or video was altered or used in an inaccurate context.

The two technical capabilities this project relies on, blockchain and metadata, aren’t entirely new, but they are being applied in ways they haven’t been before. According to TechTerms, a blockchain is a digital record of transactions. The name comes from its structure, in which individual records, called blocks, are linked together in a single list, called a chain. Most people know this technology through its use in cryptocurrencies like Bitcoin. Metadata is a broader concept, literally meaning “data about data.” It typically answers the five W’s and one H: who, what, where, when, why, and how.
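To make that concrete, here’s a minimal sketch in Python (purely illustrative, not the actual implementation behind these projects) of how a photo’s metadata could be chained together block by block, so that any later tampering is detectable:

```python
import hashlib
import json
import time

def make_block(metadata, prev_hash):
    """Bundle a record's metadata with the hash of the previous block."""
    block = {
        "metadata": metadata,      # the "data about data": who, what, where, when
        "prev_hash": prev_hash,    # link to the previous block in the chain
        "timestamp": time.time(),
    }
    # The block's own hash covers its contents, so any later edit is detectable.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A tiny two-block chain recording a photo's provenance and a later edit.
genesis = make_block(
    {"who": "staff photographer", "where": "Sydney", "when": "2020-01-04"},
    prev_hash="0" * 64,
)
edit = make_block({"action": "cropped", "tool": "photo editor"}, prev_hash=genesis["hash"])

def verify(chain):
    """Recompute each block's hash and check the links between blocks."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify([genesis, edit]))  # True; altering either block makes this False
```

The key design choice is that each block’s hash covers everything in it, including the link to the previous block, so quietly changing any detail of who, when, or where breaks the whole chain.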

The Knight Foundation study I mentioned earlier reported several key findings about how misinformation spreads on Twitter specifically. One important takeaway: anywhere from a third to two-thirds of the Twitter accounts spreading misinformation are automated accounts, or “bots.”

Pew Research Center estimates that 66 percent of all tweets containing links are tweeted or retweeted by bots. In a recent study, the University of Southern California and Indiana University estimated that as many as 48 million Twitter accounts are bots.

Two of the most popular “bot detectors” out there right now are Bot Sentinel, a Chrome extension, and Botometer, a web-based tool. Both use machine learning and artificial intelligence to estimate whether a Twitter account is more likely a bot or a human being.
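Under the hood, tools like these boil down to classification: turn an account’s behavior into numbers and train a model to separate bot-like patterns from human ones. Here’s a toy sketch of that idea (the features and figures are made up for illustration; this is not either tool’s actual model):

```python
# A toy illustration of feature-based bot classification with scikit-learn.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per account: tweets per day, share of tweets containing links,
# followers-to-following ratio, and account age in days.
X_train = [
    [450.0, 0.95, 0.01, 30],    # high-volume, link-heavy, brand-new account -> labeled bot
    [3.0,   0.10, 1.20, 2100],  # low-volume, mixed content, older account   -> labeled human
    [600.0, 0.99, 0.02, 12],    # labeled bot
    [8.0,   0.25, 0.90, 3500],  # labeled human
]
y_train = [1, 0, 1, 0]          # 1 = bot, 0 = human

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an unseen account: the model returns the probability that it is a bot.
unknown_account = [[300.0, 0.88, 0.05, 45]]
print(model.predict_proba(unknown_account)[0][1])  # a value close to 1.0 here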

This use of emerging technology is incredibly relevant to the conversation around combatting misinformation through tech, as it’s a perfect example of similar “types” of technology being used in polar opposite ways.

Both bots and bot detectors rely on some form of machine learning and artificial intelligence. The difference? The intent of the human who engineered them.

To review, artificial intelligence is an umbrella term for a field of engineering that tries to make computers perform what we consider uniquely “human” actions. Machine learning, by comparison, is just one branch of artificial intelligence, representing one way to achieve that overall goal.

Individually, these initiatives may seem small. But they each contribute to a larger movement by tech and media companies to regulate the internet. In recent news, Facebook publicly announced its support for “regulators to create legal standards for content moderation.” Some argue that Zuckerberg’s stance is simply self-serving, as the regulatory approach Facebook supports is essentially a set of guidelines it already enforces on its own platform. Regardless, one of the largest tech companies in the world has vocalized a need for rules that would likely limit its own business.

To circle back to where we started, misinformation is not solely a technological problem, but it’s also not just a human problem. Who do you think should regulate the internet? And with growing collaboration between legacy media companies and big tech, what should the future of that relationship look like? Comment below (no bots please)!
