This captured image, provided by KAIST on Tuesday, shows the interface of its deepfake detection mobile app KaiCatch. (KAIST)
The Korea Advanced Institute of Science and Technology (KAIST) said Tuesday its research team launched the country's first mobile app that detects deepfakes -- images or videos digitally manipulated with artificial intelligence (AI) -- to curb misinformation and prevent potential harm to victims targeted by the technology.
The software, named KaiCatch, detects deepfakes using AI technology that recognizes abnormal distortions in a subject's face within an image, according to KAIST, South Korea's top science and technology university.
Users can upload images or video frames to the app, which calculates the likelihood that an image has been manipulated, for 2,000 won ($1.76) per image, according to Lee Heung-kyu, a professor at KAIST's school of computing who developed KaiCatch.
Lee said he has been developing image manipulation detection software since 2015, building a large database of image and video data in the process.
The researcher expects KaiCatch to help the general public detect deepfakes, which have become a major concern in South Korea after being used to create porn involving female celebrities.
A petition posted on the presidential office's website earlier this year, calling for strong punishment of deepfake porn users, drew more than 390,000 signatures.
"This is just the starting point for KaiCatch," Lee said. "We plan to keep updating the software to detect new deepfake technology."
KaiCatch is currently available only on the Android operating system and only in Korean, but Lee said his team plans to release an iOS version for Apple users and to support other languages, including English, Chinese and Japanese. (Yonhap)