Crypto Scam Ring Used AI and Geopolitical Posts to Go Viral…
How Did the Scam Network Operate?
A coordinated network of accounts on X used exaggerated and sometimes fabricated geopolitical posts to lure users into crypto scams, according to blockchain investigator ZachXBT. In findings shared Monday, the investigator said more than 10 accounts were linked to the scheme, many of which had been acquired with pre-existing follower bases.
These accounts pushed high-engagement content focused on war narratives and political developments, often framed in sensational terms. Once posts gained traction and reached large audiences, the operators shifted tactics, promoting fraudulent crypto giveaways and tokens designed to extract funds from unsuspecting users.
“Onchain evidence suggests the scheme profited six figures,” ZachXBT said, adding that the group appeared to be systematically farming engagement and could be preparing to launch additional campaigns.
Why Were Geopolitical Posts Central to the Strategy?
According to the investigation, the scam relied on a two-phase approach. First, accounts posted emotionally charged or misleading updates about wars and political tensions. These posts were designed to spread quickly, attracting millions of views and drawing in users from outside typical crypto audiences.
Once engagement peaked, the same accounts pivoted to monetization. They introduced scam links, fake token launches, or giveaway promotions, often framed as exclusive opportunities tied to the earlier viral narrative. One example cited was a Feb. 22 pump-and-dump scheme involving a token referred to as Oramama.
The strategy relied not only on the original posts but also on amplification. ZachXBT noted that large accounts interacting with the content—through replies or reposts—helped extend its reach, often without realizing they were boosting fraudulent campaigns.
How Did AI and Impersonation Play a Role?
The accounts reportedly used artificial intelligence tools to mimic well-known social media figures, including influencers such as Mario Nawfal. By replicating tone, style, and posting patterns, the operators were able to build credibility quickly and reduce suspicion among followers.
This form of impersonation allowed scam posts to blend into existing information flows. Users encountering the content may have assumed it came from trusted sources, increasing the likelihood of engagement and, ultimately, interaction with malicious links.
The use of AI also lowered the cost and speed of running such campaigns. Instead of manually crafting posts, operators could generate large volumes of content aligned with trending narratives, enabling rapid scaling across multiple accounts.
Why Do Scams Persist Despite Platform Crackdowns?
The findings come as X continues efforts to reduce bot activity and misinformation. Last month, the platform introduced enhanced detection systems aimed at identifying automated behavior, along with user-facing labels for AI-generated content.
Despite these measures, the ZachXBT investigation highlights how coordinated networks can still operate effectively. Accounts with existing followers can bypass early scrutiny, while rapid engagement cycles make it difficult for moderation systems to respond before scams spread widely.
The persistence of these tactics suggests that platform-level controls alone may not be sufficient. The investigator argued that coordinated manipulation should lead to account bans and potential legal consequences, while also urging users to verify account history and recent activity before engaging with viral posts.
What Does This Mean for the Crypto Market?
ZachXBT’s disclosure also points to a continuing cycle: as platforms improve detection tools, operators adapt their tactics, often shifting toward more sophisticated impersonation and narrative-driven engagement strategies. That dynamic suggests that scam activity will remain an embedded feature of crypto markets, rather than a temporary issue.