Microsoft announced this Tuesday (1st) a new tool called Video Authenticator, capable of identifying manipulations in videos known as deepfakes. The tool analyzes each video frame and generates a manipulation score: a percentage indicating the likelihood that the media has been artificially altered.
The service is part of Microsoft's Defending Democracy Program and was developed by Microsoft's R&D teams in partnership with the AI Foundation, using public data from the FaceForensics++ dataset. The tool's objective is to defend democracy against threats fueled by misinformation. The announcement was made just before the US presidential election, which takes place on November 3, but the intention is to develop the technology for long-term use.
Video Authenticator can display a real-time confidence percentage for each frame of a video, detecting subtle editing artifacts such as fading and grayscale elements "that cannot be detected by the human eye".
Microsoft's Video Authenticator deepfake detection tool. (Source: Microsoft/Disclosure)
Microsoft acknowledges that deepfake creation methods are growing more sophisticated and that detection methods still have failure rates, so it plans to keep improving the technology and, in the long run, "look for stronger methods to maintain and certify the authenticity" of online publications.
"There are few tools today to help assure readers that the media they are viewing has come from a credible source and has not been altered," the company said in a statement. One of the new technologies is a browser extension that checks certificates and matches hashes to tell the reader whether the content is authentic or has been altered.
Another system should allow content producers to add hashes and digital certificates to their media, functioning as a digital watermark stored in the metadata.
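Microsoft has not published implementation details of this hashing scheme, but the general idea of hash-based verification can be sketched as follows. This is a minimal illustration, not Microsoft's actual system: a producer publishes a cryptographic hash of the media file, and a reader-side tool recomputes the hash to check that the file has not been altered. The function names here are hypothetical.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading it in chunks
    so that large video files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_media(path: str, published_hash: str) -> bool:
    """Return True if the file's current hash matches the hash the
    producer published; any change to the file breaks the match."""
    return file_sha256(path) == published_hash
```

In a real deployment the published hash would itself be signed with the producer's digital certificate, so a reader could also verify *who* published it, not just that the bytes are unchanged.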