From ‘Low Rate of Accuracy’ to 99% Success: Can OpenAI’s New Tool Detect Deepfakes?

Oct 21, 2023 - 05:00

The post From ‘Low Rate of Accuracy’ to 99% Success: Can OpenAI’s New Tool Detect Deepfakes? appeared on BitcoinEthereumNews.com.

OpenAI, a pioneer in the field of generative AI, is stepping up to the challenge of detecting deepfake imagery amid a rising prevalence of misleading content spreading on social media. At the recent Wall Street Journal’s Tech Live conference in Laguna Beach, California, the company’s chief technology officer, Mira Murati, unveiled a new deepfake detector. Murati said OpenAI’s new tool offers “99% reliability” in determining whether a picture was produced using AI.

AI-generated images range from light-hearted creations, like Pope Francis sporting a puffy Balenciaga coat, to deceptive images that can cause financial havoc. The potential and pitfalls of AI are evident, and as these tools become more sophisticated, distinguishing between what’s real and what’s AI-generated is proving to be a challenge.

While the tool’s release date remains under wraps, its announcement has stirred significant interest, especially in light of OpenAI’s past endeavors. In January 2023, the company unveiled a text classifier that purportedly distinguished human writing from text generated by models like ChatGPT. But by July, OpenAI quietly shut the tool down, noting in an update that it had an unacceptably high error rate: the classifier incorrectly labeled genuine human writing as AI-generated 9% of the time.

If Murati’s claim holds up, this would be a significant moment for the industry, as current methods of detecting AI-generated images are not typically automated. Usually, enthusiasts rely on gut feeling and focus on well-known challenges that stymie generative AI, such as rendering hands, teeth, and patterns. The difference between AI-generated images and AI-edited images remains blurry, especially if one tries to use AI to detect AI.

OpenAI is not only working on detecting harmful AI images; it is also setting guardrails to censor its own model, going beyond what is publicly stated in its content guidelines. As Decrypt found, the Dall-E tool from…
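To put that 9% false-positive rate in perspective, a rough back-of-the-envelope calculation shows why OpenAI called the error rate unacceptable. The Python sketch below assumes, purely for illustration, that 10% of checked documents are actually AI-written; the 26% true-positive rate comes from OpenAI’s published evaluation of the retired classifier, and the 9% false-positive figure is the one cited above.

# Back-of-the-envelope calculation: the share of "AI-written" flags that
# would actually be correct, given the classifier's reported error rates.
base_rate = 0.10            # assumption: 10% of checked documents are AI-written
true_positive_rate = 0.26   # OpenAI reported catching ~26% of AI-written text
false_positive_rate = 0.09  # human text wrongly flagged 9% of the time (cited above)

flagged_ai = base_rate * true_positive_rate             # correct flags
flagged_human = (1 - base_rate) * false_positive_rate   # false alarms
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flags that are correct: {precision:.0%}")  # roughly 24%

Under those assumptions, roughly three out of four documents flagged as AI-written would in fact be human-written, which illustrates why even a single-digit false-positive rate can sink a detector.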
