Deepfake Detection Standardization
The emergence of deepfakes has created a need for universally accepted methods to detect and counter them.
The Content Authenticity Initiative (CAI) outlines three key steps for mitigating deepfake threats:
- Detection. The first step covers the many deepfake detection methods proposed today. These should focus not only on identifying altered digital media, whether visual or audio, but also on determining whether it was manipulated for malicious purposes.
- Education. Content creators (filmmakers, video game developers, vloggers) should understand that disinformation is dangerous, and that creative tools allowing virtually anyone to doctor digital media must be used responsibly. These ideas should be conveyed to regular viewers and listeners as well.
- Content attribution. According to CAI's whitepaper, the most important step is the ability to trace the origin of the source media. Creators must be equipped with simple-to-use tools for providing authorship details, which in turn makes manipulated media much easier to spot. (This technique is also referred to as a provenance check.)
According to the initiative, most media files circulate on the web without any metadata (such as EXIF, XMP, or VRA) at all. This happens for various reasons: authors seeking to conceal their identity, illegal copying of the source file, and so on.
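As a rough illustration of how the absence of such metadata can be observed, the sketch below scans a JPEG byte stream for the standard EXIF and XMP signatures. This is a naive signature check, not a real metadata parser or any official CAI tool; the function name and structure are illustrative assumptions.

```python
# Naive check for embedded metadata in a JPEG byte stream.
# Assumption: we only look for the well-known signatures that EXIF and
# XMP payloads carry inside APP1 segments, without parsing segment
# boundaries. A stripped or re-encoded file will show neither.

EXIF_SIG = b"Exif\x00\x00"                      # EXIF APP1 payload header
XMP_SIG = b"http://ns.adobe.com/xap/1.0/\x00"   # XMP packet namespace identifier

def find_metadata(jpeg_bytes: bytes) -> dict:
    """Report which common metadata signatures the file appears to carry."""
    return {
        "exif": EXIF_SIG in jpeg_bytes,
        "xmp": XMP_SIG in jpeg_bytes,
    }
```

For example, a file that has been stripped of its metadata (a common state for media circulating on the web, per the initiative) would report both flags as False.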
Read more on this Wiki: https://antispoofing.org/Deepfake_Detection_Standardization:_Origin,_Goals_and_Implementation.