Connecting the Dots

Supporting Truth and Trust in Digital Content During the Deepfake Era

Software-enabled innovations in artificial intelligence, machine learning, and automation are on the rise. These promising technologies are being adopted across industry sectors, letting content creators produce and share their work faster than ever, giving businesses and governments deeper insight into their data, and helping people work more efficiently.

While the future of AI is promising, some bad actors take advantage of this technology to sow disinformation and distribute inauthentic visual content—also referred to as “deepfakes”—across the internet.

Legislators on Capitol Hill, software industry leaders, and other stakeholders are working to address the issue of inauthentic content. For example, in October 2019, Adobe, The New York Times Company, and Twitter launched the Content Authenticity Initiative (CAI) with the goal of developing an industry standard for content attribution. The CAI’s momentum continues to grow. The founding partners, working with the BBC and the Canadian Broadcasting Corporation, among others, will publish a white paper this summer that lays out the overarching goals and technical design for an industry standard. A summary of the key white paper concepts can be found on the CAI website.

Software.org recently convened a live virtual event with congressional leaders, industry representatives, and media experts to examine this topic. While the conversation covered a wide range of issues, a few key themes emerged:

Misinformation erodes truth

Opening the dialogue, Sens. Rob Portman (R-OH) and Catherine Cortez Masto (D-NV) highlighted the federal government’s role in creating forward-thinking solutions to preserve truth and confidence in our democratic institutions. They emphasized that truth matters in a democracy and that action is needed to protect the public from misuse of an otherwise encouraging technology.

This sentiment was echoed by all speakers. Nina Jankowicz, Disinformation Fellow at The Wilson Center’s Science and Technology Innovation Program, traced the modern rise of deepfakes and online misinformation to the erosion of public trust and the ability of bad actors to exploit societal divisions through visceral images.

Marc Lavallee, Executive Director of Research and Development at The New York Times, added that even “real media,” not just tampered images, can be used to frame an issue deceitfully or present it in a misleading context. Even those with good intentions can unknowingly disseminate misleading content on social media.

A holistic approach requires action on detection, education, and content attribution

Several legislative efforts are already underway. In his opening remarks, Portman explained the need for his legislation, the Deepfake Report Act (S. 2065), which would require the Department of Homeland Security to produce an annual report on deepfakes. It was the first deepfake bill to pass the Senate and was recently offered as an amendment to the FY21 National Defense Authorization Act (NDAA). Cortez Masto added that her bill, the IOGAN Act (S. 2904), would direct the National Science Foundation and the National Institute of Standards and Technology to support ongoing research efforts with the private sector to improve the detection of deepfakes and explore best practices for educating the public on discerning the authenticity of digital content.

Sam Mulopulos, Director of the U.S. Senate AI Caucus and Policy Advisor to Sen. Portman, explained that combating disinformation demands a tactical element (innovative technological solutions) as well as a strategic one: restoring a civic culture in the United States that values truth and trust.

David Dorfman, Legislative Director and General Counsel to Rep. Yvette D. Clarke (D-NY), described the challenge of mitigating the negative effects of deepfake technology and shared how Clarke’s legislation, the DEEPFAKES Accountability Act, addresses both the domestic and national security ramifications of this evolving threat vector. He described efforts like the CAI as essential to bridging the gap and empowering good actors to foster trust in the content they share.

Sharing the industry perspective, Dana Rao, Executive Vice President and General Counsel at Adobe, echoed the importance of detection, education, and attribution as components of a holistic approach. Detection is ideal, he noted, but it is also the hardest to accomplish, much like the parallel challenge in cybersecurity. He thanked Sens. Portman and Cortez Masto for their leadership on detection research and called on government to drive education and awareness efforts so that the public is armed with the information it needs to decide what to trust online.

Jankowicz added that deepfake moderation may be impossible to achieve at scale, so societies instead need to invest in robust citizen education, repair societal fissures, and equip users with tools, such as content attribution, that help them assess the credibility of the information they consume.

An industry standard for content attribution lets the user decide what is true

Rao shared that the CAI is focused on an industry standard for content attribution that leaves value judgments to the user. With this approach, creators opt into an open, transparent content attribution standard that discloses key metadata—including who changed a media asset, what was changed, and how it was changed—and provides a mechanism for viewers to validate the content they see online and understand the context of edits. “Good actors” will be able to receive credit for their work, and “bad actors” will choose not to opt in. Over time, consumers will come to expect a way to validate the information they see and be skeptical when there is no attribution.
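The CAI’s white paper will lay out the actual technical design; in the meantime, the core idea of a tamper-evident edit history can be illustrated with a minimal sketch. The Python below is hypothetical and is not the CAI specification: every function and field name here (new_manifest, record_edit, asset_sha256, and so on) is an assumption made for illustration. Each record captures who changed an asset, what was changed, and how, and each record is chained to the one before it by hash so that later tampering becomes detectable. A production standard would use public-key signatures rather than bare hashes, so that identities, not just integrity, could be verified.

```python
import hashlib
import json
from datetime import datetime, timezone

def _digest(record: dict) -> str:
    """Hash a record with stable key ordering so digests are reproducible."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def new_manifest(creator: str, asset_hash: str) -> list:
    """Start an attribution manifest with a capture/creation record."""
    origin = {
        "action": "created",
        "by": creator,                  # who created the asset
        "asset_sha256": asset_hash,     # fingerprint of the original file
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": None,                   # first link in the chain
    }
    origin["self"] = _digest(origin)    # seal the record's contents
    return [origin]

def record_edit(manifest: list, editor: str, what: str, how: str,
                new_asset_hash: str) -> list:
    """Append a who/what/how record, chained to the previous one by hash."""
    entry = {
        "action": "edited",
        "by": editor,                   # who changed the asset
        "what": what,                   # what was changed
        "how": how,                     # how it was changed (tool, operation)
        "asset_sha256": new_asset_hash, # fingerprint after the edit
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": manifest[-1]["self"],   # ties this record to the one before it
    }
    entry["self"] = _digest(entry)
    return manifest + [entry]
```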

Public sector and industry speakers alike were positive and optimistic about the work being done on the CAI. Mulopulos spoke about Congress’ role as a fact finder and the importance of groups like the Senate AI Caucus in informing policymakers about emerging efforts like the CAI before legislating on the topic. Lavallee shared why The New York Times is working on initiatives like the CAI, stressing that providing users and journalists the right underlying metadata, from photo capture to final posting, will help them better understand the origin of an image or video before sharing it.
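Building on the hypothetical sketch above (again, an illustration of the general approach rather than the Times’ or the CAI’s actual tooling), a newsroom or consumer tool could walk that chain from capture to final posting and flag any record that was altered, removed, or reordered:

```python
def verify_chain(manifest: list) -> bool:
    """Walk the records from capture to final posting, checking every link."""
    prev_digest = None
    for entry in manifest:
        body = {k: v for k, v in entry.items() if k != "self"}
        if entry["prev"] != prev_digest or entry["self"] != _digest(body):
            return False  # a record was altered, removed, or reordered
        prev_digest = entry["self"]
    return True

# Example: trace a hypothetical image from capture through one edit.
manifest = new_manifest("photo-desk@example.org",
                        asset_hash="<sha256 of original image>")
manifest = record_edit(manifest, "editor@example.org",
                       what="crop and color balance",
                       how="desktop editing tool",
                       new_asset_hash="<sha256 after edit>")
print(verify_chain(manifest))  # True while the history is intact
```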

Misinformation is a longstanding problem; it requires a collective effort

While much work remains to be done, it was clear across the board that policymakers, technology companies, content creators, and educators must work in tandem to mitigate the effects of disinformation and pioneer a way forward that helps everyday users discern genuine content from malicious deepfakes.

You can watch the full event here. To join us for future virtual events on this topic and other emerging technology issues, follow Software.org on social media and check out https://software.org/events/ for the latest details.

Chris Hopfensperger
Executive Director, Software.org

As the founding executive director of Software.org, Chris Hopfensperger leads the foundation’s efforts to help policymakers and the general public better understand the impact that software has on our lives, our economy, and our society. He also helps translate the foundation’s philanthropic and forward-looking agenda into efforts to address key issues facing the software industry.
