Cybercriminals are increasingly relying on human “AI models” to conduct elaborate scams, particularly in Southeast Asia. These individuals apply for roles requiring fluency in multiple languages and extreme availability, in some cases more than 100 deepfake video calls per day, to manipulate victims into fraudulent cryptocurrency and romance schemes. The practice highlights the convergence of human exploitation and artificial intelligence in modern fraud.
The New Face of Scams: AI Models and Deepfake Technology
A disturbing trend has emerged in the world of cybercrime: the recruitment of “AI models” or “real face” models. These individuals, many of them young women from countries such as Uzbekistan, Turkey, and Russia, are hired to appear on deepfake video calls with potential victims. The purpose? To establish trust and credibility in scams that often involve cryptocurrency investments or fabricated romantic relationships.
These models aren’t just passive participants. Some applicants boast years of experience in scamming, detailing the persuasion techniques they use to convince victims to part with their money. One application even advertised experience in “love scams” and “crypto scamming platforms.” The scale is staggering: recruitment ads demand relentless schedules, some requiring as many as 150 calls per day.
The Human Cost: Forced Labor and Exploitation
The recruitment process itself raises serious ethical concerns. Job postings often omit key details about the employer, requiring only photos, videos, and personal information like marital status. Some ads even state that passports will be retained “for visa and work permit management,” a tactic commonly used to trap individuals in forced labor environments.
While some AI models may be recruited willingly, the line between voluntary participation and exploitation blurs quickly. According to anti-trafficking organizations, victims of human trafficking are often coerced into these roles, and many workers face harsh treatment, including physical abuse and sexual harassment. The lack of transparency makes it difficult to determine the full extent of coercion.
Telegram’s Role: A Hub for Recruitment
Telegram has become a primary platform for recruiting AI models. Dozens of channels openly advertise these positions, often in known scam hubs like Cambodia. Despite Telegram’s claims that scamming-related content is forbidden, many recruitment channels remain active, indicating lax enforcement.
Researchers and investigators note red flags in the job postings: salaries unusually high for the region, demands for Chinese language skills, and frequent references to “clients” (a euphemism for victims) and cryptocurrency investments. One post explicitly listed “love scam” as the job category, underscoring the blatant criminality of these operations.
The Evolution of Fraud: From Stolen Images to Live Deepfakes
The rise of AI models represents an escalation in cybercrime tactics. Previously, scammers relied on stolen images or celebrity impersonations to build rapport with victims. Now, live deepfake calls offer a new level of realism. When victims request video verification, these models step in, providing a convincing face for the scam.
As fraud strategist Frank McKenna discovered, some models appear to work across multiple scams, moving between contracts and exploiting victims with disturbing efficiency. This suggests a highly organized criminal network leveraging deepfake technology to maximize profits.
Conclusion
The use of AI models in scams underscores the growing sophistication of cybercrime. The convergence of deepfake technology, human exploitation, and lax platform enforcement creates a dangerous environment for potential victims. As long as demand for fraudulent schemes persists, the recruitment and abuse of AI models will likely continue to expand.
