
Adobe MAX 2025: A Quick Look at the Major Updates Across the Creative Cloud Lineup

Published by qimuai · first-hand translation



Source: https://www.wired.com/story/adobe-max-2025-firefly-photoshop-updates/

Summary:

At this year's MAX conference in Los Angeles, Adobe showcased its latest AI advances, weaving AI capabilities throughout its creative app lineup and even teasing an integration with OpenAI's ChatGPT. The announcements center on three areas: upgrades to the Firefly creative tools, AI assistants in several products, and forward-looking technology plans.

Firefly gets a full-scale upgrade
Adobe's generative-AI flagship, launched in 2023, receives several major updates this time:

- Custom models open to individual users, trainable on as few as six to 12 images
- Firefly Image Model 5, with native 4-megapixel output and layered image editing
- Generate Soundtrack and Generate Speech for AI-generated audio
- A browser-based, multi-track Firefly video editor

AI assistants enter the workflow
Following Acrobat, Adobe is extending its AI assistant to Photoshop and Express. Part tutor, part agent, the assistant recommends tools based on what the user is doing while leaving final control with the user. The Express assistant is available to try now; the Photoshop version requires joining a waiting list.

Forward-looking plans
The conference also previewed two strategic initiatives: Project Moonlight, a system for carrying AI context across Adobe's creative apps, and an integration that would bring Adobe's models into ChatGPT.

Together, these moves mark Adobe's accelerating push toward an "AI-native" strategy: lowering the barrier to professional creative work and tightening ecosystem ties to widen its technical moat in creative software.


English source:

Adobe is leaning heavily into artificial intelligence. At the company's annual MAX conference in Los Angeles, it announced a slew of new features for its creative apps, almost all of which include some kind of new AI capability. It even teased an integration with OpenAI's ChatGPT.
Here's everything you need to know.
Custom Models in Adobe Firefly
Adobe's new darling app is Firefly, which launched in 2023 and offers the ability to create images and videos through generative AI. So it makes sense that the bulk of the announcements revolve around it. First, the company says it's opening up support for custom models, allowing creatives to train their own AI models to create specific characters and tones.
Businesses have been able to leverage custom models in Firefly for some time, but Adobe is rolling out the feature to individual customers. Adobe says you need just six to 12 images to train the model on a character, and “slightly more” to train it on a tone. The basis of the model remains Adobe’s own Firefly model, which means it’s trained on proprietary data and commercially safe to use. Custom models will roll out at the end of the year, and you can join a waiting list in November for early access.
Adobe's Firefly Image Model 5 is rolling out today. Like Image Model 4, the updated model has a native 4-megapixel resolution, meaning it can generate images at 2K (2560 x 1440). It also supports prompt-based editing, which sees edits generated at 2 MP or Full HD (1920 x 1080). The big improvement for Image Model 5, however, is layered image editing.
In a demo, Adobe showed me how the new Firefly model works. You can upload an image, and Firefly Image Model 5 will identify different elements, allowing you to move, resize, and replace those elements with generative features. In my demo, Adobe used a bowl of ramen, showing how Image Model 5 was able to cut out and move the chopsticks to a different area of the scene, as well as add a bowl of chili flakes generated by AI. And all of this was done without any visual artifacts.
New Generative Features in Adobe Firefly
Firefly is also getting two new generative AI features: Generate Soundtrack and Generate Speech. Both do exactly as their names suggest, with some specific guardrails in place.
Generate Soundtrack will scan an uploaded video and suggest a prompt for a soundtrack. Rather than deleting the prompt and starting from scratch, Adobe lets you choose a vibe, style, and purpose to find something that works. For instance, you could say you want a tense orchestral score to lay over the top of a chase scene.
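Adobe hasn't documented how Generate Soundtrack assembles its suggested prompt from the vibe, style, and purpose pickers; the function and template below are a hypothetical sketch of that composition, using the chase-scene example from the article:

```python
def build_soundtrack_prompt(vibe: str, style: str, purpose: str) -> str:
    """Combine the three picker fields into one soundtrack prompt.

    The field names and the sentence template are assumptions for
    illustration; Adobe has not published Generate Soundtrack's
    actual prompt format.
    """
    return f"A {vibe} {style} track to underscore {purpose}"

# The example from the article: a tense orchestral score for a chase scene.
prompt = build_soundtrack_prompt("tense", "orchestral", "a chase scene")
```

The point of the picker-based flow is that the user edits structured fields rather than free-form prompt text, so the suggested prompt stays well-formed.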
Generate Speech is the first time Adobe has added text-to-speech capabilities to Firefly, leveraging its own Firefly models, as well as models from ElevenLabs. At launch, Adobe says it’ll support 15 languages with Generate Speech, and you’ll be able to add emotion tags. These tags aren’t universal, so you can add different tags to different parts of a line to change the inflection. Generate Soundtrack and Generate Speech are rolling out shortly to Firefly.
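Because the emotion tags aren't universal, a single line can be split into spans that each carry their own inflection. The bracket syntax and parser below are purely hypothetical, to illustrate how non-global tags partition a line; Adobe has not published Generate Speech's actual markup:

```python
import re


def split_by_emotion(line: str, default: str = "neutral"):
    """Split a line on inline [tag] markers into (emotion, text) pairs.

    The [tag] bracket syntax is an assumption for illustration only.
    Each tag applies from where it appears until the next tag.
    """
    segments = []
    current = default
    # The capturing group keeps the [tag] delimiters in the split output.
    for part in re.split(r"(\[[^\]]+\])", line):
        if not part.strip():
            continue
        if part.startswith("[") and part.endswith("]"):
            current = part[1:-1]  # a new tag applies from here on
        else:
            segments.append((current, part.strip()))
    return segments


# One line, two inflections:
segments = split_by_emotion("[whispering] Stay close. [urgent] Run, now!")
```

Each `(emotion, text)` pair would then be synthesized with its own delivery, which is what distinguishes per-span tags from a single global setting.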
There's a new Firefly video editor, too. For the first time, you can access a full, multi-track video editor in your browser with built-in Firefly. Adobe says it’s built to combine multiple sources, bringing together generated and captured content across video, audio, and images. There will be a waiting list for the Firefly video editor, but Adobe hasn’t announced when it will release broadly.
AI Assistants in Photoshop and Express
The buzzword in the AI world of late is agentic AI—an AI assistant that completes specific tasks for you. Adobe already has such an assistant in Acrobat, but it’s bringing that same capability to Photoshop and Express. Adobe says the assistant will strike a balance between “tactile and agentic,” serving as somewhat of an educational tool to navigate Adobe’s apps.
Within Photoshop or Express, you can pull up the assistant to accomplish different tasks. It can point you toward the right tool depending on what you’re doing, while still giving you control over the final output. The AI assistant in Express is available now to try out, but you’ll need to sign up for the waiting list for Photoshop.
Project Moonlight and ChatGPT Integration
As usual, Adobe took some time at MAX to preview what’s coming down the road beyond these immediate updates. The two main announcements? Project Moonlight and a ChatGPT integration.
Adobe says it’s looking into integrating Adobe features directly into ChatGPT, using the same window to generate images, video, and more using Adobe’s models. It's in the very early stages right now, but Adobe says it’s working with OpenAI (via Microsoft), and the goal is to add its creative features into ChatGPT.
Project Moonlight, however, is a system designed to carry the context of the AI models you interact with across Adobe’s creative applications. Adobe says you’ll be able to connect your social accounts to give the AI model context, generating content that fits with your own style and tone. You'll also be able to carry context from other Adobe applications.
A private beta for Project Moonlight is launching at Adobe MAX for attendees, and a larger public beta is expected soon. Adobe hasn’t announced an official release date.
