
人工智能诈骗正迎来"模特"新面孔。

qimuai 发布 · 一手编译



内容来源:https://www.wired.com/story/models-are-applying-to-be-the-face-of-ai-scams/

内容总结:

近期,柬埔寨及东南亚地区出现大量以“AI脸模”或“真人脸模”为幌子的招聘信息,实则为跨国电信诈骗集团招募人员进行深度伪造视频通话,以实施针对欧美受害者的“杀猪盘”等诈骗活动。

24岁的乌兹别克斯坦女性Angel在应聘视频中自称掌握多国语言,可立即到岗,并有一年“AI脸模”经验。然而,这类岗位的实际工作内容是利用AI换脸技术,全天候与潜在诈骗对象进行视频通话,建立信任后诱导其进行虚假加密货币或黄金投资。

调查发现,此类招聘主要通过Telegram等平台发布,应聘者多来自土耳其、俄罗斯、乌克兰及亚洲多国。招聘广告常要求应聘者提供身高体重、婚姻状况甚至“疫苗接种状态”,承诺高额月薪(最高达7000美元),但工作条件苛刻:每日需进行多达100至150次视频通话,每月仅有一天完整休假,且护照常被公司扣留。

尽管部分应聘者可能出于自愿并享有一定自由,但反诈组织指出,多数招聘所在地为柬埔寨西哈努克市等已知诈骗窝点集中地,工作内容常隐晦提及“客户服务”“加密货币投资”或“爱情骗局”,实为诈骗活动。更有受害者证言,部分所谓“模特”在园区内遭受殴打或性骚扰。

目前,Telegram平台虽明令禁止诈骗内容,但相关招聘频道仍大量存在。网络安全专家提醒,高薪招聘“AI脸模”且工作地点位于东南亚诈骗高危区域时,需高度警惕其背后可能涉及的非法囚禁、人口贩卖及有组织诈骗风险。

中文翻译:

求职时,安吉尔会极力推销自己的语言技能。"我能说流利的英语,中文也不错,还会说俄语和土耳其语,"这位24岁、光彩照人的乌兹别克斯坦女性在给招聘方录制的自拍式视频中介绍道。她说自己当天刚抵达柬埔寨西哈努克市,随时可以开始工作。

然而,这些令人印象深刻的语言技能,很可能被用于针对美国人的精密"杀猪盘"骗局。因为安吉尔应聘的并非传统企业职位,而是"AI换脸模特"——整天坐在电脑前,通过深度伪造视频通话操控潜在诈骗对象。她的求职资料除了身高体重,还注明已有"1年AI模特经验"。

从事这类工作的远不止安吉尔一人。《连线》杂志审查了数十个发布于Telegram的招聘视频和广告,发现来自土耳其、俄罗斯、乌克兰、白俄罗斯及多个亚洲国家的人员,正在应聘柬埔寨及东南亚地区的AI模特或"真人脸模特"。该地区已成为庞大工业化诈骗窝点的聚集地,数千名人口贩运受害者被囚禁其中,被迫从事加密货币投资和情感诈骗。

这些高科技、规模达数十亿美元的犯罪集团不仅诱骗人们进入诈骗园区工作,还会吸引人员主动应聘相关"职位"。越南反诈骗非营利组织ChongLuaDao的网络犯罪调查员吴明孝表示:"从去年至今,他们也开始招聘AI模特。他们会提供软件,用AI技术换脸实施情感诈骗。"这位改过自新的前黑客如今专门追踪诈骗窝点活动并援助受害者,他在Telegram上发现了约二十个频道发布该地区AI模特招聘信息。

反人口贩运组织"人类研究咨询"也追踪到多人在Telegram上应聘"知名诈骗枢纽城市"的"模特"和"AI模特"职位,包括安吉尔的申请。

AI模特的兴起正值网络犯罪集团广泛采用人工智能技术,将换脸作为网络诈骗手段。诈骗分子通常会在社交媒体或通讯平台用虚假身份联系潜在受害者,常盗用名人或俊男美女照片诱使对方交流。

建立联系后,他们会通过密集互动培养感情,最终骗取钱财。有时多个诈骗者会共同操控一个账号,用同一虚假身份与受害者沟通。但当潜在受害者要求视频通话验证身份时,深度伪造视频和换脸模特就会派上用场。部分东南亚诈骗中心设有专门的"AI房间"用于此类通话。

《连线》查阅的AI模特或"真人模特"招聘广告要求超长工作时间、极少休假且排班密集。这些由频道管理员发布的广告通常不列联系方式或具体雇主信息。某份据称为期六个月的招聘启事要求应聘者每日发送照片、进行视频语音通话、录制音视频信息,并注明"每日约100通视频呼叫"。

另一些招聘帖显示每日通话量可达150次。"可使用滤镜,但需确保图像逼真。允许实拍视频,禁止使用假发,"一则广告写道。作为"福利",应聘者每月可享1天全假和4个半天假期。还有广告将工作时间设定为柬埔寨时间晚10点至早10点,且偏好"西方口音"。某模特招聘广告明确写道:"公司将保留您的护照,用于签证和工作许可管理。"扣押护照正是诈骗园区控制人员的主要手段之一。

尽管有少数男性应聘AI模特,但《连线》观察到绝大多数申请者都是20岁出头的年轻女性。应聘者需提交自我介绍短视频、经验期望说明文字及个人照片,部分还需注明婚姻状况和"疫苗接种"情况。

"三年多来,我与多家中国公司合作过股市、加密货币和情感故事等项目,"一人在招聘视频中说道。另一人表示:"凭借经验,我擅长客户沟通,能运用个人技巧说服他们投资,并阐述黄金交易的好处。"

由于视频申请未包含全名或联系方式,《连线》未能联系到应聘者。

据"人类研究咨询"组织数据,模特应聘者要求的月薪高达7000美元。她们还对工作条件提出具体要求,其中许多是被贩运至诈骗窝点者无法享有的。一名女性要求拥有独立房间且"可以外出",另一人则要求"休假可回家"并配备"个人洗衣机"。

与诈骗行业受害者合作的非营利组织EOS collective联合创始人李玲表示,尽管部分模特是自愿应聘且可能比人口贩运受害者享有更多自由,但仍可能遭受雇主虐待。"一位欧洲受害者告诉我们,他在园区见过意大利模特,但无法判断她们是否自愿工作,因为她们曾在其面前遭殴打,还存在性骚扰现象。"

《连线》向Telegram提供了近几个月发布AI模特及其他职位招聘的约二十个频道列表。该公司似乎未删除任何频道,但其发言人表示平台政策禁止诈骗相关活动。

Telegram发言人回应:"平台服务条款明确禁止鼓励或促成诈骗的内容,一经发现立即删除。此类情况下,人们可能有正当理由提供肖像,因此需要个案审查。"

吴明孝指出,Telegram上绝大多数模特招聘广告和申请虽未明确提及诈骗,但包含大量危险信号。"首先需要质疑为何需要AI模特,"他说。其他警示特征包括工作地点位于柬埔寨知名诈骗据点、承诺远高于当地水平的薪资,以及频繁要求中文能力。

研究人员和《连线》的审查发现,这些广告和视频申请使用的措辞也与诈骗活动高度吻合,包括频繁使用诈骗集团指代受害者的"客户"一词,以及反复提及加密货币投资或黄金交易。一位自称有18个月AI模特和"真人脸模特"经验者表示,其以往工作涉及说服他人投资:"我深知如何与客户有效沟通、建立信任、发送优质照片以及逗乐对方。"

不过部分招聘帖更为露骨,直接将申请的"就业市场"列为:"情感诈骗"。另一则帖子描述个人经验为:"3年诈骗平台加密货币客服(杀手)经历。"

去年弗兰克·麦肯纳的母亲开始收到投资诈骗短信后,他便开始拦截信息并与发送者周旋。这位反欺诈软件公司Point Predictive的首席策略师长期追踪"AI模特"现象,为弄清其运作模式,他安排了一次对方与自己母亲的视频通话。

"通话唯一目的就是证明他们是真人并获取信任,"麦肯纳说。视频中年轻女子的面部似乎使用了AI滤镜,"效果不太稳定。房间里有其他人,所以有回声。"他表示后来还与另一位AI模特进行了简短通话。

约一个月后,麦肯纳在网上看到了疑似同一模特的招聘视频,称因合同到期正在寻找新工作。"这些AI模特似乎构成了一个小圈子,完全自愿地辗转各地,收入相当可观,"他说,"他们可能整天待在视频房间里,与大量不同受害者通话。"

英文来源:

When applying for jobs, Angel talks up her language skills. “I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish,” the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately.
Those impressive language skills, however, have likely been put to use as part of elaborate “pig-butchering” scams targeting Americans. That’s because, instead of applying for a conventional corporate job, Angel was putting herself forward to work as an “AI face model”—sitting in front of a computer all day and making deepfake video calls to manipulate potential scam victims. Her application, which also required her height and weight, says she has already clocked up “1 year as an AI model.”
Angel is far from alone in this pursuit. A WIRED review of dozens of recruitment videos and job ads posted to Telegram show people from around the world—including Turkey, Russia, Ukraine, Belarus, and multiple Asian countries—applying to be AI models or “real face” models in Cambodia and Southeast Asia. The region has become home to vast, industrialized scamming operations that hold thousands of human trafficking victims captive and force them to run online cryptocurrency investment and romance scams.
As well as tricking people into working in scam compounds, these high-tech, multibillion-dollar criminal enterprises can also attract people into seeking “work” as part of the operations. “In the past year until today, they are also hiring people doing AI modeling,” says Hieu Minh Ngo, a cybercrime investigator at the Vietnamese scam-fighting nonprofit ChongLuaDao. “They will give you the software so they can swap their face by using AI and they can do romance scams,” he says.
Ngo, a reformed criminal hacker who now tracks scam compound activity and supports victims, identified around two dozen channels on Telegram that have some job postings for AI models in the region. Humanity Research Consultancy, an anti-human-trafficking organization, has also tracked people applying on Telegram for jobs in “known scam hub cities” as “models” and “AI models,” including Angel’s application.
The rise of AI models comes as cybercriminals are broadly adopting AI and using face-swapping as part of their online scamming. Typically, fraudsters will use fake personas to contact potential victims on social media or messaging platforms. They will often use stolen images of celebrities or attractive men or women to entice a person into talking to them.
Once they make contact, they will then bombard them with attention to help build up a relationship, before trying to get them to part with their cash. In some instances, multiple people may control the scammers’ account and message the victim under a single fake persona. But if a potential victim asks for a video call during these interactions—to check if the person they are speaking to is real, for instance—that’s when deepfake video calls and models who have their faces swapped can be used. Some Southeast Asian scam centers have dedicated “AI rooms” where the calls are made from.
Job advertisements for AI models or “real models” reviewed by WIRED demand excessive working hours, offer little free time, and require a relentless schedule. The ads are usually posted by a channel administrator and don’t include contact details or list who someone would specifically be working for. One recruitment post for an alleged six-month contract says the person will need to send photos daily, make video and voice calls, and create audio and video messages. “Approximately 100 video calls per day,” the post says.
Other posts list up to 150 potential calls per day. “Filters may be used, but ensure the image is realistic. Live-action videos are permitted; wigs are prohibited,” another ad reads. For the privilege, the person would allegedly get one full day and four half days off per month. Yet another ad lists working hours as between 10 pm and 10 am in Cambodia and a preference that the person will have a “Western accent.” One model-job ad says: “The company will retain your passport for visa and work permit management.” Taking people’s passports is one of the primary ways scam compound operators hold people captive.
While a few men apply for the AI model roles, the vast majority of applications viewed by WIRED were from young women, mostly in their early twenties. Applicants are asked to send a short video introducing themselves, text about their experience and expectations and photographs of themselves; some are required to include their marital status and “vaccination” status.
"For over three years, I have worked with Chinese companies for different kinds of projects including stock market, cryptocurrency, and love story,” one person says in a recruitment video. Another says: “Based on my experience, I am good handling customer, I persuade them to invest by using my own techniques and discussing how gold trading benefits them."
The video applications do not contain full names or contact details, so WIRED was unable to contact those applying for roles.
Modeling applicants have requested salaries of up to $7,000 per month, according to Humanity Research Consultancy. They also make specific requests about their working conditions, many of which may not be afforded to people who have been trafficked into the scam operations. One woman requested her own room and that she “can go outside.” Another requested that they could “go home on day off” and have a “personal washing machine.”
Although some of the models are recruited to work in the roles and may get more freedoms than victims of human trafficking, says Ling Li, the cofounder of the nonprofit EOS collective which works with victims of the scam industry, they may still face harsh treatment from bosses. “One European victim told us that he saw some Italian models in his compound, but he cannot tell [if] they are [there] willingly or not because they were beaten in front of him,” she says. “And also there is some sexual harassment.”
WIRED sent Telegram a list of two dozen jobs channels and recruitment channels that have advertised AI models, alongside other roles, in recent months. The company did not appear to remove any of the channels; however, a spokesperson says its policies do not allow scamming-related activity to take place.
“Content that encourages or enables scams is explicitly forbidden by Telegram’s terms of service and is removed whenever discovered,” a spokesperson for Telegram says. “In cases such as this, there are legitimate reasons one might give their likeness, and so such content must be examined on a case-by-case basis.”
The vast majority of the model-job ads and applications on Telegram don’t specifically mention scamming work, but they include a host of red flags indicating scamming, Ngo says. “Why [do you] need AI model? That’s the first question,” Ngo says. Other warning signs include the locations being in known scamming sites in Cambodia, claims of high salaries for the region, and frequent requirements for Chinese language skills, Ngo says.
The ads and video applications also include language closely aligned with scams, according to researchers and WIRED’s review of the posts. This includes frequent mentions of “clients,” a term scam operations use instead of “victims,” plus frequent references to cryptocurrency investments or gold trading. One person, who claimed to have been working as an AI model and “real face” model for 18 months, said their previous work involved convincing people to invest: “I really know how to make good communication to a client, how to make them trust us, how to send a good picture to them, and how to make them laughing.”
However, some posts are more explicit, listing a “job market” someone was applying for as: “love scam.” Another post describes a person’s experience as: “3 year as customer service (killer) of scamming platform crypto.”
After Frank McKenna’s mom started getting scam text messages about making investments last year, he began intercepting them and talking with the senders. McKenna, the chief strategist at anti-fraud software firm Point Predictive who has closely tracked “AI models,” says he wanted to understand how they were operating, so he set up a video call between them and his mom.
“The only purpose of that call was to prove that they’re a real person and to gain trust,” McKenna says. During the call, he says, the young woman on camera appeared to be using an AI filter on her face. “It’s kind of glitchy. There’s other people in the room with her, so there’s echoing,” he says. “Then we had another short call with another AI model.”
A month or so later, McKenna says, he saw what appeared to be the same model’s recruitment video posted online, saying she was looking for a new contract as hers had expired. “It was kind of a small world of these AI models who seem to go from place to place, completely voluntarily, making pretty good money,” McKenna says. “They’re probably just in the video room doing calls all day with tons of different victims.”

