OK, what's going on with LinkedIn's algorithm?

Source: https://techcrunch.com/2025/12/12/ok-whats-going-on-with-linkedins-algo/
Summary:
Recently, an experiment called #WearthePants drew widespread attention on the professional networking platform LinkedIn. Several women who changed the gender on their profiles to male saw significant growth in the impressions their posts received, prompting questions about whether the platform's algorithm carries an implicit bias against women.
Michelle (a pseudonym), a product strategist who took part in the experiment, said that despite having more than 10,000 followers, her posts earned roughly the same impressions as those of her husband, who has only about 2,000. After switching her profile to a male identity and adjusting her writing style, her impressions grew 200% within a week and engagement rose 27%. Another participant, founder Marilynn Joyner, saw her impressions surge 238% within a day of changing her gender to male. Several other women in the experiment reported similar experiences.
LinkedIn responded that its algorithm and AI systems "do not use demographic information such as age, race, or gender as a signal to determine the visibility of content in the feed," and stressed that differences in individual users' content reach "do not automatically imply unfair treatment or bias." The company said it tests millions of posts to connect users with opportunities, and that demographic data is used only to check that content from different creators competes on equal footing.
Brandeis Marshall, a data ethics consultant, noted that a social platform's algorithm is a complex system influenced by many factors, and that changing one's gender may have pulled just one of its "levers." She believes the AI models behind many platforms may have "a white, male, Western-centric viewpoint innately embedded" during training, because the training data itself contains the biases present in human society.
Sarah Dean, an assistant professor of computer science at Cornell University, added that a user's profile, professional background, and engagement behavior can all shape how the algorithm recommends their content. LinkedIn said its systems weigh hundreds of signals when selecting content, including insights from profiles, networks, and activity.
Although LinkedIn says it has continually adjusted its algorithm to reduce bias, users are broadly confused by the platform's lack of transparency. One data scientist said her post impressions recently plunged from thousands to a few hundred, which has been demoralizing for creators. Some male users, meanwhile, reported that content focused on a specific niche and offering clear value still earns good reach.
For now, the results of the #WearthePants experiment are inconclusive. Experts suggest the cause may be an implicit algorithmic preference for certain writing styles: more concise, direct prose (commonly stereotyped as masculine) may be easier to surface, while softer, more emotional writing may be at a disadvantage. Because the platform does not disclose its algorithm's details, users cannot confirm the real cause.
As LinkedIn's user base has grown, posting volume on the platform is up 15% year-over-year and comments are up 24%, making the feed increasingly competitive. LinkedIn advises that professional content such as career insights, industry analysis, and educational material is more likely to be recommended. Many users nonetheless continue to call for greater algorithmic transparency, even though that is a difficult ask in commercial practice.
The experiment highlights a challenge tech companies must keep confronting in an era when AI increasingly drives content distribution: ensuring algorithmic fairness and avoiding the reinforcement of entrenched social biases.
English source:
One day in November, a product strategist we’ll call Michelle (not her real name) logged into her LinkedIn account and switched her gender to male. She also changed her name to Michael, she told TechCrunch.
She was partaking in an experiment called #WearthePants where women tested the hypothesis that LinkedIn’s new algorithm was biased against women.
For months, some heavy LinkedIn users complained about seeing drops in engagement and impressions on the career-oriented social network. This came after the company’s vice president of engineering, Tim Jurka, said in August that the platform had “more recently” implemented LLMs to help surface content useful to users.
Michelle (whose identity is known to TechCrunch) was suspicious about the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who has only around 2,000. Yet she and her husband tend to get around the same number of post impressions, she said, despite her larger following.
“The only significant variable was gender,” she said.
Marilynn Joyner, a founder, also changed her profile gender. She’s been posting on LinkedIn consistently for two years and noticed in the last few months that her posts’ visibility declined. “I changed my gender on my profile from female to male, and my impressions jumped 238% within a day,” she told TechCrunch.
Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and others.
LinkedIn said that its “algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed” and that “a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias” within the Feed.
Social algorithm experts agree that explicit sexism may not have been a cause, although implicit bias may be at work.
Platforms are “an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly,” Brandeis Marshall, a data ethics consultant, told TechCrunch.
“The changing of one’s profile photo and name is just one such lever,” she said, adding that the algorithm is also influenced by, for example, how a user has interacted with other content in the past and how they interact with it now.
“What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another. This is a more complicated problem than people assume,” Marshall said.
Bro-coded
The #WearthePants experiment began with two entrepreneurs — Cindy Gallop and Jane Evans.
They asked two men to make and post the same content as them, curious to know if gender was the reason so many women were feeling a dip in engagement. Gallop and Evans both have sizable followings — more than 150,000 combined compared to the two men who had around 9,400 at the time.
Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers. Other women then took part. Some, like Joyner, who uses LinkedIn to market her business, became concerned.
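For a quick sense of scale, the arithmetic behind “more than 100% of his followers” works out as follows. This is a minimal sketch using only the figures reported above; note that the ~9,400 followers was reported for the two men combined, so either man individually had at most that many.

```python
# Figures reported in the article. The ~9,400 followers was a combined
# count for the two men, so either man individually had at most that many.
gallop_evans_followers_combined = 150_000
men_followers_combined = 9_400
gallop_reach = 801
man_reach = 10_408

# The man's reach exceeds even the combined male follower count, so it is
# more than 100% of his own followers under any split of the 9,400.
print(f"man reach / combined male followers: {man_reach / men_followers_combined:.0%}")  # ~111%

# Gallop's reach as a share of the 150k+ combined following is well under 1%.
print(f"Gallop reach / combined following: {gallop_reach / gallop_evans_followers_combined:.2%}")  # ~0.53%
```

The comparison is rough, since reach also depends on content, timing, and network overlap, but it shows why the gap drew attention.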
“I’d really love to see LinkedIn take accountability for any bias that may exist within its algorithm,” Joyner said.
But LinkedIn, like other LLM-dependent search and social media platforms, offers scant details on how content-picking models were trained.
Marshall said that most of these platforms “innately have embedded a white, male, Western-centric viewpoint” due to who trained the models. Researchers find evidence of human biases like sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning.
Still, how any individual company implements its AI systems is shrouded in the secrecy of the algorithmic black box.
LinkedIn says that the #WearthePants experiment could not have demonstrated gender bias against women. Jurka’s August statement said — and LinkedIn’s Head of Responsible AI and Governance, Sakshi Jain, reiterated in another post in November — that its systems are not using demographic information as a signal for visibility.
Instead, LinkedIn told TechCrunch that it tests millions of posts to connect users to opportunities, and that demographic data is used only for such testing, for example, checking whether posts “from different creators compete on equal footing and that the scrolling experience, what you see in the feed, is consistent across audiences.”
LinkedIn has been noted for researching and adjusting its algorithm to try to provide a less biased experience for users.
It’s the unknown variables, Marshall said, that probably explain why some women saw increased impressions after changing their profile gender to male. Partaking in a viral trend, for example, can lead to an engagement boost; some accounts were posting for the first time in a long while, and the algorithm may have rewarded them for returning.
Tone and writing style might also play a part. Michelle, for example, said the week she posted as “Michael,” she adjusted her tone slightly, writing in a more simplistic, direct style, as she does for her husband. That’s when she said impressions jumped 200% and engagements rose 27%.
She concluded the system was not “explicitly sexist,” but seemed to deem communication styles commonly associated with women “a proxy for lower value.”
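One note on reading Michelle’s figures: a 200% jump means impressions roughly tripled, not doubled. A quick check, using an invented baseline since the article reports only the percentage changes:

```python
# Invented baseline counts for illustration; the article gives only percentages.
impressions_before = 1_000
engagements_before = 100

impressions_after = impressions_before * (1 + 2.00)   # +200% => 3x the baseline
engagements_after = engagements_before * (1 + 0.27)   # +27%

print(impressions_after, engagements_after)  # 3000.0 127.0
```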
Stereotypical male writing styles are believed to be more concise, while the writing style stereotypes for women are imagined to be softer and more emotional. If an LLM is trained to boost writing that complies with male stereotypes, that’s a subtle, implicit bias. And as we previously reported, researchers have determined that most LLMs are riddled with them.
Sarah Dean, an assistant professor of computer science at Cornell, said that platforms like LinkedIn often use entire profiles, in addition to user behavior, when determining content to boost. That includes jobs on a user’s profile and the type of content they usually engage with.
“Someone’s demographics can affect ‘both sides’ of the algorithm — what they see and who sees what they post,” Dean said.
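To make the “proxy” mechanism concrete, here is a deliberately toy sketch. It is entirely hypothetical: LinkedIn has not disclosed how its models score text, and the features and weights below are invented for illustration. The point is that a ranker with no demographic inputs at all can still systematically favor one group if a scored style feature, like sentence length or hedging language, correlates with how that group tends to write.

```python
# Hypothetical toy ranker: scores posts purely on style features, with no
# demographic inputs. If hedging or long sentences correlate with how one
# group stereotypically writes, the score becomes an implicit proxy.
HEDGE_WORDS = {"maybe", "perhaps", "just", "hopefully", "somewhat"}

def style_score(text: str) -> float:
    words = text.lower().split()
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    avg_sentence_len = len(words) / sentences
    hedges = sum(word.strip(".,!?") in HEDGE_WORDS for word in words)
    # Invented weights: reward short, direct sentences; penalize hedging.
    return 1.0 - 0.02 * avg_sentence_len - 0.15 * hedges

direct = "Ship the feature. Measure the impact. Share the numbers."
hedged = "I think maybe we should just ship the feature and hopefully see some impact."

print(f"direct: {style_score(direct):.2f}")  # higher score
print(f"hedged: {style_score(hedged):.2f}")  # lower score for the same idea
```

In this toy, neither post reveals the author’s gender, yet if one group’s typical register looks more like the second example, that group’s content is consistently ranked lower, which is the kind of implicit bias Marshall and Dean describe.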
LinkedIn told TechCrunch that its AI systems look at hundreds of signals to determine what is pushed to a user, including insights from a person’s profile, network, and activity.
“We run ongoing tests to understand what helps people find the most relevant, timely content for their careers,” the spokesperson said. “Member behavior also shapes the feed, what people click, save, and engage with changes daily, and what formats they like or don’t like. This behavior also naturally shapes what shows up in feeds alongside any updates from us.”
Chad Johnson, a sales expert active on LinkedIn, described the changes as deprioritizing likes, comments, and reposts. The LLM system “no longer cares how often you post or at what time of day,” Johnson wrote in a post. “It cares whether your writing shows understanding, clarity, and value.”
All of this makes it hard to determine the true cause of any #WearthePants results.
People just dislike the algo
Nevertheless, it seems like many people, across genders, either don’t like or don’t understand LinkedIn’s new algorithm — whatever it is.
Shailvi Wakhulu, a data scientist, told TechCrunch that she’s averaged at least one post a day for five years and used to see thousands of impressions. Now she and her husband are lucky to see a few hundred. “It’s demotivating for content creators with a large loyal following,” she said.
One man told TechCrunch he saw about a 50% drop in engagement over the past few months. Still, another man said he’s seen post impressions and reach increase more than 100% in a similar time span. “This is largely because I write on specific topics for specific audiences, which is what the new algorithm is rewarding,” he told TechCrunch, adding that his clients are seeing a similar increase.
But Marshall, who is Black, believes from her own experience that posts about her professional expertise perform more poorly than posts related to her race. “If Black women only get interactions when they talk about black women but not when they talk about their particular expertise, then that’s a bias,” she said.
Dean believes the algorithm may simply be amplifying “whatever signals there already are.” It could be rewarding certain posts not because of the writer’s demographics, but because there’s been more of a history of response to similar posts across the platform. While Marshall may have stumbled into another area of implicit bias, her anecdotal evidence isn’t enough to determine that with certainty.
LinkedIn offered some insights into what works well now. The company said the user base has grown, and as a result, posting is up 15% year-over-year while comments are up 24% YOY. “This means more competition in the feed,” the company said. Posts about professional insights and career lessons, industry news and analysis, and education or informative content around work, business, and the economy are all doing well, it said.
If anything, people are just confused. “I want transparency,” Michelle said.
However, since content-picking algorithms have always been closely guarded secrets, and transparency can lead to people gaming them, that’s a big ask. It’s one that’s unlikely ever to be satisfied.