Microsoft has announced tools to help news organizations identify deepfakes before the election

As the United States moves toward a presidential election at a time when it is harder than ever to stem the spread of fake news online, Microsoft is releasing a new tool aimed at better identifying AI-generated deepfakes.

The software, a product of a partnership between the tech giant and the hybrid nonprofit-commercial venture AI Foundation, analyzes a still image or video and determines the likelihood that it was created or manipulated using AI. It is currently available only to select organizations “involved in the democratic process”, including news agencies and political campaigns.

Microsoft is partnering with media organizations such as The New York Times, the BBC and CBC/Radio-Canada in a broader effort to test the technology and build a set of standards for media authentication in the digital age. The company is also releasing a service that will allow content creators to prove the authenticity of their published videos.
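Services like this generally follow a hash-and-sign pattern: the producer hashes the content and signs the digest, and a reader later recomputes the hash to confirm nothing was altered. The sketch below illustrates that pattern only; the Ed25519 scheme, function names, and workflow are assumptions for illustration, not Microsoft's actual implementation.

```python
# Illustrative hash-and-sign content authentication. The key scheme and the
# publish/verify split are assumptions; Microsoft's service and certificate
# format are not described in detail in the announcement.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def publish(video_bytes: bytes, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Producer side: hash the content and sign the digest."""
    digest = hashlib.sha256(video_bytes).digest()
    return key.sign(digest)  # signature travels with the video as metadata


def verify(video_bytes: bytes, signature: bytes,
           public_key: ed25519.Ed25519PublicKey) -> bool:
    """Reader side: recompute the hash and check it against the signature."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True   # content is byte-for-byte what was published
    except InvalidSignature:
        return False  # content was altered after signing


key = ed25519.Ed25519PrivateKey.generate()
video = b"...raw video bytes..."
sig = publish(video, key)
assert verify(video, sig, key.public_key())
assert not verify(video + b"x", sig, key.public_key())
```

Any single-byte change to the content invalidates the signature, which is what makes the approach useful for proving a published video has not been tampered with.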

The deepfake detection tool, known as Microsoft Video Authenticator, works by analyzing images for subtle distortions or irregularities that can be imperceptible to casual observers. It draws on a system created by the AI Foundation called Reality Defender, which that company already supplies to media outlets as part of its core business.
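Microsoft describes the tool as returning a confidence score, and for video it can produce a score on each frame as the clip plays. A minimal sketch of that interface, under stated assumptions, is below; the frame-scoring model is a hypothetical stand-in, since Video Authenticator's internals are not public.

```python
# Sketch of a per-frame confidence-scoring interface. The score_frame model
# is a placeholder for a trained classifier; thresholds and aggregation are
# illustrative assumptions, not Microsoft's actual method.
from typing import Callable, Iterable, List

Frame = bytes  # stand-in for decoded pixel data


def authenticate(frames: Iterable[Frame],
                 score_frame: Callable[[Frame], float]) -> dict:
    """Aggregate per-frame manipulation scores into an overall likelihood.

    score_frame is assumed to map one frame to a confidence in [0, 1],
    e.g. a classifier sensitive to the subtle irregularities that casual
    viewers miss.
    """
    scores: List[float] = [score_frame(f) for f in frames]
    if not scores:
        raise ValueError("no frames to analyze")
    return {
        "per_frame": scores,                   # real-time score per frame
        "overall": sum(scores) / len(scores),  # aggregate likelihood
        "flagged": max(scores) > 0.5,          # any frame over threshold
    }


# Usage with a dummy scorer: frames containing a marker byte score high.
report = authenticate([b"ok", b"fake!", b"ok"],
                      lambda f: 0.9 if b"!" in f else 0.1)
print(report["overall"], report["flagged"])  # ~0.37, True
```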

“Separating fact from fiction online represents one of the biggest challenges for democracy,” said Tom Burt, corporate vice president of customer security and trust at Microsoft. “Reality Defender can help campaigns, newsrooms and others quickly and responsibly use a variety of technologies to determine the truth. We believe this partnership can provide organizations with essential tools for free and fair elections.”

Researchers have recently ranked deepfakes as the number one crime and terrorism threat posed by AI-related technology, owing to their potential for blackmail and extortion as well as for spreading dangerous misinformation. In recent months manipulated media has become a flashpoint, with a recent study finding that the primary malicious use of deepfakes, created specifically through machine learning, was non-consensual pornography.
