In a separate video of five wolf pups chasing one another, the animated animals appear to blend into one another as the AI seems unable to distinguish one figure from another. A similar issue is seen in a video of a basketball, which appears to pass through the netting of a hoop.

Pay attention to strange flickering and body movements, said Mr Lee Joon Sern, senior director of machine learning and cloud research at Ensign InfoSecurity’s Ensign Labs.

Significant inconsistencies like these are clear indicators of deepfakes, as seen in a series of incidents in late 2023 involving deepfakes of political figures in Singapore.

Playing catch-up

Researchers globally are working on tools to counter the risks of AI-made content, Mr Lee said. These include software that analyses videos for signs of AI, for instance by looking for telltale patterns in a video’s audio track and cross-referencing the footage against other sources for discrepancies.

Others are looking into metadata analysis, unpacking a digital file’s data to trace its origin and authenticity, he added.

Regulation and penalties for using AI for fraud are key steps to rein in the technology, said Mr Chris Boyd, staff research engineer at cyber-security firm Tenable.

The European Union has passed the AI Act, which will require creators to label AI-generated content with digital watermarks.


