The rise of deepfakes is set to be challenged by a new way of identifying authentic content, published this week.
The new standard, known as JPEG Trust, will enable photos and videos to be ‘tagged’ and authenticated, so that users and creators can build trust in media content.
Deepfakes are media that have been digitally manipulated, often using artificial intelligence, to create a likeness of a person that is convincingly realistic.
Imitating identities to commit fraud and concocting fake videos to spread ‘false news’ are just some examples of how deepfakes are used maliciously, with potentially very damaging consequences.
Following a year in which some 50 national elections were held worldwide, there were widespread concerns that deepfakes would be used to disseminate misinformation and sway voters, causing significant harm. While these fears did not fully materialize, there is concern that the increasing use of deepfakes is distorting our relationship with reality, reinforcing biases and showing people the truth they want to believe.
While many experts and organizations are attempting to identify and prevent deepfakes, the technology is becoming so sophisticated that soon even experts trained to detect them will fail.
JPEG Trust, also known as ISO/IEC 21617, was developed by the Joint Photographic Experts Group (JPEG), a collaboration between IEC, ISO and ITU, under ISO/IEC JTC 1’s subcommittee SC 29, Coding of audio, picture, multimedia and hypermedia information. This is the same committee behind the JPEG technology that has enabled the world to use and share billions of images each day for over 30 years.
JPEG convenor Professor Touradj Ebrahimi said that as technologies evolve to detect deepfakes, so too do those that create them.
“It is like two uses of AI that compete, one to generate deepfakes and another to detect them. It’s a race, a race for the good. But we risk losing it unless other solutions are found.”
A new, revolutionary solution is JPEG Trust. Aimed at increasing trust in shared media through identification and authentication, JPEG Trust is a standard that enables a secure and reliable way of tagging media and recording any modifications. It features ‘trust indicators’ around authenticity, provenance and integrity, meaning users of content can see where it has come from and whether or not it has been tampered with.
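The actual manifest format and trust indicators are defined in the standard itself, but the core idea behind an integrity check can be sketched in a few lines: bind a cryptographic fingerprint to the media bytes when the content is created, then recompute it later so that any modification is detectable. The Python sketch below is a simplified illustration of that principle using an HMAC over the raw bytes; it is not the JPEG Trust mechanism, and all names and the key-handling scheme are hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for a creator's signing key; a real system would
# use public-key signatures so anyone can verify without the secret.
SIGNING_KEY = b"demo-signing-key"

def tag_media(media_bytes: bytes) -> bytes:
    """Produce an integrity tag: an HMAC-SHA256 over the media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).digest()

def verify_media(media_bytes: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time.

    Any change to the media bytes breaks the match, so tampering
    is detected even if only a single byte was altered.
    """
    return hmac.compare_digest(tag_media(media_bytes), tag)

# Untouched media verifies; an edited copy does not.
original = b"\xff\xd8\xff...image data..."
tag = tag_media(original)
assert verify_media(original, tag)
assert not verify_media(original + b"!", tag)
```

In a provenance system along the lines the article describes, a tag like this would travel with the asset and be re-issued (and chained) each time a legitimate edit is recorded, which is what lets viewers distinguish declared modifications from undisclosed tampering.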
The first version of the standard, Core Foundations, has just been published, and it is expected to evolve over time alongside the technologies it addresses.
What is powerful about JPEG Trust, said Prof Ebrahimi, is that the standard is not governed by one or two organizations but by a diverse range of stakeholders from all over the world under strict rules and procedures of consensus and collaboration.
This ensures interoperability and consistency everywhere. An example of the value of this approach, and of international standardization in general, is JPEG itself, he said.
“The JPEG is over 30 years old, yet the many devices and applications where images are used can still decode an image that was created 30 years ago. That is thanks to the internationally agreed standards that govern it. It will be the same with JPEG Trust.”
The standard addresses not just technical requirements but current and future regulatory requirements, ethical and privacy issues as well as social tendencies and consumer behaviour so that it is relevant and widely adopted.
It is also prescriptive enough to be effective yet broad enough to be used across a wide range of technologies and jurisdictions. It is hoped that, in an era where governments everywhere are trying to regulate AI, JPEG Trust will be part of the equation.
“AI can and does bring so many benefits to society and to content creators, but only if we can trust what is produced. JPEG Trust is a means to achieving that.”
JPEG Trust was a focus of the recent AI for Good summit in Geneva. The discussion led to the announcement of a wider World Standards Cooperation project to address the challenges of deepfakes and generative AI.
Building trust in AI and ensuring its responsible deployment is a key element of IEC’s vision and mission. Learn more about the role of IEC standards in detecting deepfakes as well as other ways IEC is contributing to keeping AI safe and trustworthy.