It used to be easy to tell fact from fiction, but not anymore. Social platforms now give billions of people the ability to express their opinions and share them with family and friends, who, in turn, can distribute them to a much larger audience.
These platform companies, which must deal with a growing deluge of user-generated content, have to make Solomon-like decisions about the veracity of this information. If they publish content that is clearly untrue, consumer backlash is likely to be swift, threatening revenue and reputation.
On the other hand, if platforms refuse to publish certain content, they may be accused of censorship, bias or having a political agenda. Adding to the problem are algorithms, which can produce filter bubbles that reinforce users’ existing beliefs, rather than showing them a variety of viewpoints.
This is often a lose-lose situation for digital platforms, and a number of executives in the tech industry warn that things could get worse as consumer skepticism and mistrust escalate. So the pressure is mounting on US companies to increase content moderation, especially given the combined effects of ongoing COVID-19 disinformation campaigns, continuing social unrest and the recent US presidential election.
The election and the turmoil that followed carry risk management implications for technology, media and telecom (TMT) companies. Given today’s volatile environment, it’s critical to find ways to rebuild trust in content platforms.
Content moderation is not new: Publishers have been monitoring comments on their sites for decades. But today’s content is far more abundant, diverse and divisive than ever before. The challenge is defining policies on when and how to delete or label objectionable content, without trampling users’ expectations of being free to engage in any dialogue they choose.
This also places platforms under pressure to handle reader complaints equitably, remove bias in algorithms and promote transparency in processes and decisions. Some companies are finding ways to rise to this admittedly daunting challenge.
For example, oversight bodies can bring meaningful transparency to content moderation—a step that consumers and governments are demanding. In fact, more than eight in ten Americans think a content oversight board is a “good or very good idea,” according to research from the Knight Foundation and Gallup.
After Trump supporters stormed the US Capitol on January 6, a variety of social media companies either suspended or closed Trump's accounts over concerns about ongoing potential for violence. Further, Amazon removed Parler, a popular right-wing social media site, from its hosting services, and Apple and Google removed Parler from their app stores.
Meanwhile, the European Union’s revised Audiovisual Media Services Directive (AVMSD) governs the coordination of national legislation on all audiovisual media, including TV broadcasts and on-demand services.
Ireland’s government recently approved the General Scheme of the Online Safety and Media Regulation bill—to handle complaints from users—as well as the detailed drafting of the bill by the Office of the Attorney General. The legislation, which includes safety codes that outline how online video-sharing services will deal with harmful content, will enable the Online Safety Commissioner to regulate online content and apply sanctions for non-compliance.
Content moderation requires an effective governance and control framework. Determine whether your company has such a framework by asking the following questions:
Some US legislators have called for industry standards to reduce the spread of misinformation, disinformation and synthetic content such as deepfakes. And the US Congress is considering changes to Section 230 of the Communications Decency Act, which shields internet companies from liability for third-party content. One provision says these platforms are not liable if they make good-faith efforts to moderate objectionable content. In October 2020, the Federal Communications Commission announced that more clarity on Section 230 would be forthcoming.
Meanwhile, lawmakers continue to hold hearings in an effort to better understand the situation before making changes. They are divided over whether Section 230 lets tech companies do too little to moderate content or pushes them to do too much. In response, the platforms have pointed out that it would be almost impossible for them to operate if they could be sued both for the content they leave up and for the content they take down. Social media coverage of the recent assault on the US Capitol highlights the precariousness of this balancing act.
Despite these challenges, digital platforms have a chance to bring together consumers, industry and government agencies to develop regulations that would benefit all stakeholders. By taking such a proactive approach, companies could create positive change—something that’s desperately needed in today’s unsettled environment.
The financial sector is a prime example of an industry in which different groups have worked together to initiate needed changes. Major card brands formed the Payment Card Industry Security Standards Council (PCI SSC), which created the Data Security Standard (PCI DSS), a uniform global standard. These protocols allow consumers to make secure, seamless electronic payments anywhere in the world. And EMVCo, founded by Europay, MasterCard and Visa, manages the specifications that give the payments industry global acceptance and interoperability.
As the volume of user-generated content continues to skyrocket past the capacity of human content moderators, many companies are turning to AI-based technologies for help. For example, algorithms that use machine learning can decipher what content is most likely to engage a specific user and then serve that person content in line with those preferences. But this can lead to so-called filter, or content, bubbles, which can exacerbate polarization and divisiveness.
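To make that mechanism concrete, here is a minimal Python sketch of an engagement-driven ranker. The topic tags, scoring function and data are hypothetical stand-ins for a real recommendation model, not a description of any platform’s actual system.

```python
from collections import Counter

# Hypothetical engagement history: topics of posts this user interacted with.
user_history = ["politics", "politics", "sports", "politics"]

# Candidate posts awaiting ranking, each tagged with one topic (simplified).
candidates = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics"},
]

def predicted_engagement(post, history):
    """Toy stand-in for an ML model: score a post by how often the user
    has engaged with its topic before."""
    counts = Counter(history)
    return counts[post["topic"]] / len(history)

def rank_feed(candidates, history, k=3):
    """Serve the k posts with the highest predicted engagement."""
    return sorted(candidates,
                  key=lambda p: predicted_engagement(p, history),
                  reverse=True)[:k]

feed = rank_feed(candidates, user_history)
print([p["topic"] for p in feed])   # ['politics', 'politics', 'sports']
# Topics the user has never engaged with ("science") never surface, and each
# new engagement reinforces the skew: the feed narrows into a filter bubble.
```

Because the only optimization target in this sketch is past engagement, diversity of viewpoints is never rewarded, which is the essence of the filter-bubble problem.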
In contrast, other applications of AI can support transparency and build trust—especially when humans are in the loop. But algorithms have to be designed, built, implemented, managed and either updated or retired to ensure top performance and accuracy.
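As one illustration of what “humans in the loop” can look like in practice, the sketch below routes model decisions by confidence. The classifier, labels and thresholds are assumptions made for the example, not any vendor’s actual workflow.

```python
# Hypothetical thresholds; a real system would calibrate these per policy area.
AUTO_ACTION_THRESHOLD = 0.95    # act automatically only when the model is very sure
HUMAN_REVIEW_THRESHOLD = 0.60   # uncertain cases go to trained human reviewers

def classify(post_text):
    """Stand-in for an ML moderation model returning (label, confidence)."""
    return "hate_speech", 0.72  # placeholder output for the sketch

def route(post_text):
    label, confidence = classify(post_text)
    if label == "acceptable":
        return "publish"
    if confidence >= AUTO_ACTION_THRESHOLD:
        # Automatic removals are still logged so decisions can be audited or appealed.
        return f"remove automatically ({label}); log for audit and appeal"
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return f"queue for human review ({label}, confidence {confidence:.2f})"
    # Low-confidence flags stay up but are sampled to evaluate and retrain the model.
    return "publish; sample for model evaluation"

print(route("example post"))  # queue for human review (hate_speech, confidence 0.72)
```

Logging every automated decision and routing uncertain cases to people is what makes this kind of pipeline auditable, which in turn supports the transparency users and regulators are asking for.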
AI models also have to be continually monitored and regularly updated, as they can degrade after a few months—particularly in sectors that are constantly evolving, like the news. As a result, building trust in content moderation systems requires building trust in the AI that supports them.
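One simple way to operationalize that monitoring is to track how often the model agrees with a stream of human-reviewed labels and raise an alert when agreement degrades. The window size and threshold below are illustrative assumptions only.

```python
from collections import deque

WINDOW_SIZE = 500        # most recent human-reviewed decisions to track (illustrative)
ALERT_THRESHOLD = 0.85   # investigate or retrain if agreement falls below this (illustrative)

# 1 = model agreed with the human reviewer's label, 0 = it did not.
recent_agreement = deque(maxlen=WINDOW_SIZE)

def record_review(model_label, human_label):
    """Call whenever a human reviewer adjudicates a model decision."""
    recent_agreement.append(1 if model_label == human_label else 0)

def rolling_agreement():
    return sum(recent_agreement) / len(recent_agreement) if recent_agreement else None

def check_for_drift():
    """Flag degradation once the window is full and agreement has dropped."""
    rate = rolling_agreement()
    if len(recent_agreement) == WINDOW_SIZE and rate < ALERT_THRESHOLD:
        return f"ALERT: agreement {rate:.1%} below {ALERT_THRESHOLD:.0%}; review or retrain the model"
    return "OK"

record_review("hate_speech", "acceptable")   # one disagreement
print(check_for_drift())                     # "OK" until the window fills
```

A check like this, run continuously against fresh human-labeled samples, is the kind of always-on monitoring the next paragraph refers to.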
Tools like PwC’s AI Risk Confidence Framework, which provides guidance and controls across the end-to-end AI lifecycle, support active, always-on monitoring of AI.
As the adoption of AI for multiple uses—including content moderation—continues to accelerate, companies should ask some key questions to help them identify problem areas and achieve confidence in their AI systems. These include:
Ultimately, effective content moderation and process transparency are essential to help avoid overregulation and consumer backlash, as well as potential reputational damage and revenue loss. Though some platforms are addressing content moderation challenges, others are working out where and how to start. Follow these steps to get on the path to effective content moderation:
Being a moderator of facts presents challenges; digital platforms need to make a good-faith effort to keep their sites free from misinformation, disinformation, hate speech, fake news and fabricated content. But they don’t have to go it alone. Now is the time for companies to work together, using all the tools available to build—and maintain—trust.