Everyone is talking about generative AI – including cybercriminals in the crypto space. A report by a blockchain analytics firm shows how quickly fraudsters are adapting the new technology for their own ends. Two aspects are particularly worrying.

The old adage that criminals are among the most enthusiastic early adopters of new technologies seems to be proving true once again. According to a report by blockchain analytics firm Elliptic, artificial intelligence (AI) is increasingly being used in crypto-related crime, with the focus, naturally, on various types of fraud.

For example, fraudsters use generative AI to create deepfakes of prominent figures such as Elon Musk, Singapore's former Prime Minister Lee Hsien Loong, or Taiwanese presidents Tsai Ing-wen and Lai Ching-te. These deepfakes then lure gullible victims into fraudulent projects on platforms such as YouTube or TikTok, not unlike the long-running "Lion's Den" scam, in which an established, trusted authority is abused for a crypto scam. Thanks to AI, however, the approach has become even more cunning and sophisticated.

AI is also frequently used as a buzzword to promote tokens or investment programs. One example is the iEarn trading bot scam of 2023, which promised investors handsome profits on the crypto markets using the new miracle technology. It ended in losses of several million dollars and a warning from the US Commodity Futures Trading Commission (CFTC). According to a chart in the report, thousands of tokens are circulating on blockchains such as BNB Chain, Solana and Ethereum, advertised with buzzwords such as "GPT", "OpenAI" or "Bard".

AI systems such as large language models (LLMs) are also useful for identifying vulnerabilities in open-source code. Microsoft and OpenAI likewise report that more and more cybercriminals and hackers are using LLMs; paid tools for hackers, such as HackedGPT and WormGPT, already exist.

In addition, according to Elliptic, AI serves as a kind of turbocharger for disinformation campaigns. Social media posts in text, image and perhaps video form are generated automatically, possibly along with the associated infrastructure, such as accounts and fake websites. This disinformation is often part of a scam; there are even "scam-as-a-service" providers who claim to use AI to automatically design websites, including search engine optimization.

Finally, AI is making identity theft more effective. The forgery of ID cards and other documents, such as driving licenses, tax returns or electricity bills, is being perfected and simplified by AI. Here, too, service providers already generate such documents for a small fee, and deepfakes could soon undermine video identification processes as well.

The last two points are particularly worrying, because they undermine certainty, truth and identity. If AI is used in disinformation campaigns with fake images, videos and audio tracks, it will become impossible to distinguish truth from lies; if AI perfects the forging of identity documents, it will become impossible to verify identity online. Both go far beyond crypto fraud; they shake the foundations of our electronic world.

Although Elliptic emphasizes that AI has enormous positive potential, the analysts warn that time is running out. Pandora's box has so far been opened only a crack, and there is still a window of opportunity to address the threats posed by the technology. To take advantage of it, however, law enforcement agencies, compliance experts, AI developers and others must work together decisively.



Source: https://bitcoinblog.de/2024/06/17/eine-perfekte-aber-unheilvolle-partnerschaft/


