WE POST ONE NEW BILLION-DOLLAR STARTUP IDEA every day.

Problem: There is far too much disinformation online.

Solution: One powerful way to resolve uncertainty that machines cannot handle alone is to tap the intelligence of “the crowd.” This business would harness the crowd to fight disinformation online through a CAPTCHA-like system that breaks large articles down into their constituent sentences and asks the crowd to verify the truth or falsity of each one. The hope is that, at scale, this business would help verify the accuracy of articles across the web.
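To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical function names and simulated votes) of the core loop: split an article into sentences, collect crowd votes on each one, and label a sentence only once enough votes have arrived.

```python
import re
from collections import Counter

def split_into_sentences(article: str) -> list[str]:
    # Naive splitter on terminal punctuation; a real system would use
    # an NLP library such as spaCy or NLTK for robust segmentation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]

def verdict(votes: list[bool], min_votes: int = 5) -> str:
    # Withhold judgment until enough independent raters have voted,
    # then take a simple majority.
    if len(votes) < min_votes:
        return "pending"
    tally = Counter(votes)
    return "likely true" if tally[True] >= tally[False] else "likely false"

# Simulated crowd votes per sentence (True = rated accurate).
article = "The Eiffel Tower is in Paris. The Eiffel Tower was built in 1999."
crowd_votes = [
    [True, True, True, True, False],    # votes on sentence 1
    [False, False, True, False, False], # votes on sentence 2
]
for sentence, votes in zip(split_into_sentences(article), crowd_votes):
    print(f"{sentence!r} -> {verdict(votes)}")
```

A production system would of course need spam resistance and rater-reputation weighting on top of this; the point is only that the underlying pipeline is simple.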

But there is a valid question: would this sort of business even work at all? As described in a study published in PNAS (Proceedings of the National Academy of Sciences of the United States of America):

Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
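One detail in the abstract worth pausing on is the “equally weighting ratings of Democrats and Republicans” step. A simple way to read it: average trust ratings within each partisan group first, then average the two group means, so that neither group’s sample size dominates the score. A toy illustration with made-up ratings:

```python
from statistics import mean

def balanced_trust(dem_ratings, rep_ratings):
    # Average within each partisan group first, then average the group
    # means, so the larger sample cannot swamp the overall score.
    return mean([mean(dem_ratings), mean(rep_ratings)])

# Hypothetical 1-5 trust ratings for a single news source.
dem = [4, 5, 4, 4, 5, 4]   # six Democratic raters, group mean ~4.33
rep = [2, 3]               # two Republican raters, group mean 2.5
print(balanced_trust(dem, rep))   # ~3.42, the politically balanced score
print(mean(dem + rep))            # ~3.88, the raw pooled mean for comparison
```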

In short, this study by Gordon Pennycook and David G. Rand suggests that the crowd can help counter misinformation. The Conversation even published an article in November 2017 describing a prototype platform its authors had built to do exactly this:

On our prototype website, users can create new inquiries or engage in ongoing ones. To fact-check a statement, for example, a user might post an assertion someone made in the news or on social media. Other users could then contribute related stances, claims, evidence and sources. Each addition helps flesh out the formal argument structure with additional information.

As an argument is constructed, users can examine the information presented and evaluate it for themselves. They can rate particular contributions as helpful (or unhelpful), and even rate other users themselves as reliable or less so. They can also filter arguments based upon the properties of a source, evidence, user or claim, including the emotional content. In this way, users can examine a preexisting argument and decide for themselves which stance is supported for a given set of criteria.

The problem with both of these approaches is that they require end users to seek out and then actively participate in the platforms that combat fake news. By adopting a “CAPTCHA model,” this business would instead bring the fake-news snippets to end users for review.
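What might that CAPTCHA model look like in practice? One plausible design, borrowed from how the original reCAPTCHA paired a known word with an unknown one, is to show each user a control sentence whose truth value is already known alongside an unverified sentence, and to count the vote on the unverified sentence only when the control is answered correctly. A rough sketch (all names and the in-memory stores are hypothetical):

```python
import random
import uuid

# Hypothetical in-memory stores; a real service would use a database.
GOLD = {"The sun rises in the east.": True}  # control sentences, answers known
UNKNOWN = ["The city council voted 7-2 to approve the budget."]
PENDING = {}  # challenge_id -> (gold_sentence, unknown_sentence, gold_answer)

def issue_challenge():
    # Pair a control sentence with an unverified one, in random order.
    gold, answer = random.choice(list(GOLD.items()))
    unknown = random.choice(UNKNOWN)
    cid = str(uuid.uuid4())
    PENDING[cid] = (gold, unknown, answer)
    return {"challenge_id": cid, "sentences": random.sample([gold, unknown], 2)}

def submit(cid, ratings):
    # Count the vote on the unknown sentence only if the user rated
    # the control sentence correctly; otherwise discard the submission.
    gold, unknown, answer = PENDING.pop(cid)
    if ratings.get(gold) != answer:
        return False
    print(f"Recorded vote on {unknown!r}: {ratings.get(unknown)}")
    return True

challenge = issue_challenge()
print("Please rate:", challenge["sentences"])
submit(challenge["challenge_id"], {s: True for s in challenge["sentences"]})
```

The control sentence doubles as quality control, playing exactly the role the known word played in reCAPTCHA.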

As described by ZDNet, “an economic study by Tel Aviv, Israel-based cybersecurity firm CHEQ and the University of Baltimore have revealed that fake news is costing the global economy $78 billion each year. It recently commissioned a report detailing the scale of the global rise of online fake news and how much this impacts us financially. It used the latest economic analysis, proprietary CHEQ data, and expert interviews for its report. The report analyzes the direct economic cost inflicted by fake news, alongside the growing price paid by businesses and governments to counter misinformation.”

While some are skeptical that crowdsourcing will be the ultimate fix, given the scale of the problem I argue it could be one of many complementary solutions in the market. Ideally, this business would tackle that large and growing market and become a unicorn company that helps preserve the validity of news.

Monetization: Ultimately, this business would provide News-Verification-as-a-Service for a fee to any newsfeed company or content provider.

Contributed by: Michael Bervell (Billion Dollar Startup Ideas)
