Deepfakes are seemingly getting better by the day, so how do we detect what is real and what is fake?
Yu Chen, electrical and computer engineering professor at the Thomas J. Watson College of Engineering and Applied Science at Binghamton University, takes a look.
Yu Chen is a Professor of Electrical and Computer Engineering at the Binghamton University – State University of New York (SUNY). He received a Ph.D. in Electrical Engineering from the University of Southern California (USC) in 2006. Leading the Intelligent and Sustainable Edge Computing (I-SEC) Lab, his research focuses on Trust, Security, and Privacy in Computer Networks, including Edge-Fog-Cloud Computing, the Internet of Things (IoT), and their applications in smart and connected environments.
Dr. Chen’s publications include over 300 papers in scholarly journals, conference proceedings, and books. His research has been funded by NSF, DoD, AFOSR, AFRL, New York State, and industrial partners. He has served as a reviewer for NSF panels, the DoE Independent Review Panel, and international journals, and on the Technical Program Committees (TPCs) of prestigious conferences. He is a Fellow of SPIE, a Senior Member of ACM and IEEE, and a member of SIGMA XI and AFCEA.
Exposing A.I. – Using Tools to Detect Fake Media
As artificial intelligence networks become more accessible, digitally manipulated deepfake photos and videos are becoming increasingly challenging to detect.
My team and I have broken down images using frequency domain analysis techniques to look for anomalies that could indicate they are generated or edited by AI.
The team created thousands of images with popular generative AI tools such as Adobe Firefly, PIXLR, DALL-E, and Google Deep Dream and analyzed them using signal processing techniques. By transforming the images into frequency components with mathematical functions, we can examine their frequency domain characteristics, thus making it easier to spot the difference between AI-generated and natural images.
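The core idea can be illustrated with a small sketch. The function below is not the team's actual pipeline, just a generic frequency-domain analysis: it takes the 2D Fourier transform of a grayscale image and averages the log-power spectrum over concentric rings, yielding a radial profile. Natural photos tend to show a smooth fall-off in power toward high frequencies, so systematic deviations in this profile can hint at generator artifacts (the function name and parameters are illustrative).

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Radially averaged log-power spectrum of a grayscale image."""
    # 2D FFT, shifted so the zero-frequency component sits at the center
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    power = np.log1p(np.abs(spectrum) ** 2)

    # Distance of each frequency bin from the center (radial frequency)
    h, w = image.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)

    # Average the power within concentric rings of increasing frequency
    bins = np.linspace(0, r.max(), n_bins + 1)
    ring = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    totals = np.bincount(ring, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(ring, minlength=n_bins)
    return totals / np.maximum(counts, 1)

# Demo on a synthetic image; a real detector would compare profiles of
# known-natural vs. AI-generated images.
rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
profile = radial_power_spectrum(img)
print(profile.shape)  # (32,)
```

In practice a classifier would be trained on such spectral features rather than on raw pixels, which is what makes the anomalies easier to spot than visual cues alone.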
To compare authentic and AI-generated images, the team used a new machine learning tool called Generative Adversarial Networks Image Authentication. While obvious AI anomalies, such as an unusual number of fingers, can reveal whether an image is AI-generated, our frequency-domain strategy has proven more efficient at making that determination.
In addition to AI-generated images, we have developed a technique to detect deepfakes in audio-video recordings. By leveraging electrical network frequency signals, we can verify whether the recording is authentic or has been tampered with.
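To convey the intuition behind electrical network frequency (ENF) analysis, here is a minimal sketch, not the team's method: mains power leaves a faint 50/60 Hz hum in recordings, and its frequency drifts slightly over time. Tracking that drift window by window produces a signature; splices or tampering break its continuity. The code below estimates the hum frequency in each one-second window of a synthetic signal (all names and parameters are illustrative).

```python
import numpy as np

def enf_track(audio: np.ndarray, fs: int, nominal: float = 60.0,
              win_s: float = 1.0, band: float = 1.0) -> np.ndarray:
    """Estimate the mains-hum frequency in successive windows."""
    win = int(fs * win_s)
    estimates = []
    for start in range(0, len(audio) - win + 1, win):
        frame = audio[start:start + win] * np.hanning(win)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        # Search only a narrow band around the nominal mains frequency
        mask = (freqs > nominal - band) & (freqs < nominal + band)
        estimates.append(freqs[mask][np.argmax(spectrum[mask])])
    return np.array(estimates)

# Synthetic 5-second recording: a 60 Hz hum buried in noise
fs = 1000
t = np.arange(0, 5.0, 1.0 / fs)
audio = np.sin(2 * np.pi * 60.0 * t)
audio += 0.1 * np.random.default_rng(1).standard_normal(t.size)
track = enf_track(audio, fs)
print(track)  # five per-window estimates clustered near 60 Hz
```

A real verifier would compare the recovered track against logged grid-frequency data, or check it for the discontinuities that editing introduces.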
Such techniques are necessary in today’s social media-dependent world, where the misuse of AI and deepfakes has given the technology a bad reputation. Applying them will help researchers show the public how AI can drive innovation, preserving the integrity of media in an increasingly digital world.