Artificial intelligence is giving rise to a new generation of digital scams that are increasingly difficult to detect. According to a recent report by cybersecurity company Kaspersky, AI-powered fraud is expected to grow sharply in 2026, including fake event websites that closely imitate legitimate ticketing platforms and can cause significant financial losses for users.
Experts warn that these scams take advantage of AI tools capable of generating realistic text, images and web designs in a matter of minutes. The result is pages almost identical to the originals, with logos, event calendars, fake payment gateways and urgent messages that pressure people into buying non-existent tickets.
According to Kaspersky, artificial intelligence has become a double-edged sword. While it helps optimize processes in sectors such as entertainment, it also makes cybercriminals' work easier: they can now scale up scams with less effort and greater reach.
One of the most worrying scenarios involves massive events. The high demand for concerts, festivals or sports competitions creates the ideal context for quick scams. With generative AI, fraudsters can clone official sites, adapt prices in real time, and launch fraudulent campaigns on social media or through targeted emails.
“When AI becomes critical infrastructure, any failure, abuse or malicious use can have serious consequences,” the cybersecurity company warns in its report.
AI-based scams are not limited to a single technique. According to specialists, they usually combine several elements, such as cloned sites, fake payment gateways and personalized messages.
In many cases, the victim does not realize they have been scammed until they try to access the event or check their bank account.
Kaspersky’s report is not limited to fake tickets. The company identified five major risks associated with the accelerated use of artificial intelligence in the entertainment and digital services sector.
Unlike traditional scams, sites created with AI do not have obvious design or writing errors. The texts are coherent, the images are credible, and the user experience is similar to that of legitimate platforms. This reduces common red flags and increases the success rate of these deceptions.
Additionally, AI’s ability to personalize messages allows for more targeted attacks, tailored to victims’ browsing history or interests.
Against this backdrop, experts recommend a set of preventive measures, chief among them treating both AI systems and the data that feed them as a critical part of the attack surface.
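One simple preventive check users and platforms can apply is comparing a site's domain against known legitimate ticketing domains, since cloned sites often rely on lookalike hostnames. The sketch below is illustrative only and not from the Kaspersky report; the allow-list, threshold and function names are assumptions, and real anti-phishing systems use far richer signals.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list of legitimate ticketing domains (illustrative only).
KNOWN_DOMAINS = ["ticketmaster.com", "eventbrite.com"]

def closest_known_domain(url: str) -> tuple[str, float]:
    """Return the known domain most similar to the URL's host, with its score."""
    host = urlparse(url).hostname or ""
    best = max(KNOWN_DOMAINS, key=lambda d: SequenceMatcher(None, host, d).ratio())
    return best, SequenceMatcher(None, host, best).ratio()

def is_suspicious(url: str, threshold: float = 0.75) -> bool:
    """Flag hosts that closely resemble, but do not exactly match, a known domain."""
    host = urlparse(url).hostname or ""
    best, score = closest_known_domain(url)
    return host != best and score >= threshold
```

For example, `is_suspicious("https://ticketrnaster.com/buy")` flags the typosquatted host ("rn" imitating "m"), while an exact match to a known domain passes. The threshold is a tuning choice: too low and unrelated domains are flagged, too high and close imitations slip through.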
Artificial intelligence will continue to transform the digital experience, but it will also raise the sophistication of scams. In 2026, distinguishing what is real from what is fake will be more difficult than ever, and prevention is emerging as the main defense against losing money to increasingly credible hoaxes.

