GROWING USE of artificial intelligence (AI) tools in fraud schemes such as deepfakes is one of the top cybersecurity concerns for 2025, according to a cybersecurity company.
“As quantum attacks loom and deepfakes become mainstream tools of deception, businesses will either innovate or risk being outpaced by adversaries,” Palo Alto Networks Asia-Pacific and Japan President Simon Green said in a statement this month.
Citing a recent PricewaterhouseCoopers International Limited report, Palo Alto Networks said more than 40% of leaders do not understand the risks that could come with emerging technologies like virtual environment tools, generative AI, enterprise blockchain, quantum computing, virtual reality, and augmented reality.
In the Asia-Pacific region, deepfakes are already being used for targeted attacks, the company said.
“Savvy criminals will take note and use ever-improving generative AI technology to launch credible deepfake attacks,” it added.
Palo Alto Networks said that audio deepfakes or “highly credible” voice cloning will become more prevalent in cyberattacks next year.
“We can expect deepfakes to be used alone or as part of a larger attack much more often in 2025,” it said.
Deepfakes combine pictures and voices of real people to create manipulated images, videos, and audio, according to the Cybercrime Investigation and Coordinating Center (CICC).
CICC Executive Director Undersecretary Alexander K. Ramos said in a statement in September that the manipulative tool is also one of the biggest threats in the upcoming 2025 midterm elections.
“It’s a tool that can mislead the public because of the content. We wouldn’t know which is real or not,” he said.
Mr. Ramos added that the government is looking for ways to counter the harmful effects of the technology, and that the CICC is working with the Commission on Elections to maintain the integrity of the polls.
“We have continuous talks because we have to research technologies that could help them govern this coming election.” — Almira Louise S. Martinez