Adobe's experimental AI tools can edit an entire video from a single frame.

Source: https://www.theverge.com/news/811602/adobe-max-2025-sneaks-projects
Summary:
At its recent Adobe Max conference, the company showed off a series of experimental AI editing tools that introduce new ways of interacting with images, video, and audio. These prototypes, known as "Sneaks," include three standout capabilities:
Project Frame Forward drives whole-video edits from a single frame. Users select a target in the first frame and enter an AI prompt to perform complex operations such as removing a person and filling in the background naturally, and the change is automatically synced across every frame. In the demo, the system identified and removed a woman from a video and replaced the background, and could also generate a physically plausible puddle from a hand-drawn placement, with the reflection tracking in real time the movements of a cat already in the footage.
Project Light Touch opens up generative relighting. The feature can change the direction of a photo's light source, simulate illumination from lamps that were switched off, and let users drag to control light diffusion and shadow intensity in real time. In the demo, the editor made a pumpkin glow from within and shifted a scene seamlessly from day to dusk, with support for custom color temperature and dynamic RGB lighting effects.
Project Clean Take tackles a stubborn problem in audio post-production. Through AI speech reconstruction, users can fix mispronunciations without re-recording, adjust how happy or inquisitive a delivery sounds, and even replace individual words while preserving the speaker's vocal identity. The tool can also separate ambient sounds into individual sources, letting users adjust or mute specific background noises to noticeably improve voice clarity.
Also shown were tools such as Project Surface Swap, Project Turn Style, and Project New Depths. Although none of these experiments are guaranteed to ship in Creative Cloud or the Firefly apps, past Sneaks have graduated to production (Photoshop's Distraction Removal, for instance, grew out of an early Sneaks project), so some version of these capabilities may well reach creators in the near future.
English source:
Adobe's experimental AI tool can edit entire videos using one frame
Adobe demonstrated some of the experimental AI tools it's working on at its Max conference that provide new ways to intuitively edit photos, videos, and audio. These experiments, called “sneaks,” include tools that instantly apply any changes you make to one frame across an entire video, easily manipulate light in images, and correct mispronunciations in audio recordings.
Here are the coolest developmental features that Adobe showcased at Max.
Project Frame Forward is one of the more visually impressive sneaks, allowing video editors to add or remove anything from footage without using masks — a time-consuming process for selecting objects or people. Instead, Adobe’s demonstration shows Frame Forward identifying, selecting, and removing a woman in the first frame of a video, and then replacing her with a natural-looking background, similar to Photoshop tools like Content-Aware Fill or Remove Background. This removal is automatically applied across the entire video in a few clicks.
Users can also insert objects into the video frame by drawing where they want to place it and describing what to add with AI prompts. These changes will similarly be applied across the whole video. The demonstration shows that these inserted objects can also be contextually aware, showing a generated puddle that reflects the movement of a cat that was already in the video.
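Adobe has not published how Frame Forward propagates an edit, but the edit-one-frame-then-apply-everywhere idea can be illustrated with a deliberately naive sketch: diff the original and edited first frames to find the changed pixels, then paste that region into every frame. This is an assumption for illustration only, and it only holds for a static camera and a stationary edit region.

```python
import numpy as np

def propagate_first_frame_edit(frames, edited_first, threshold=10):
    """Naively copy a first-frame edit onto all frames of a video.

    frames: sequence of HxWx3 uint8 arrays (the original video)
    edited_first: HxWx3 uint8 array (the first frame after editing)
    Assumes a static camera and an edit confined to a fixed region;
    a real tool would track the region across frames instead.
    """
    first = frames[0].astype(np.int16)
    # Pixels that differ noticeably between the original and edited first frame
    mask = np.abs(first - edited_first.astype(np.int16)).max(axis=-1) > threshold
    out = []
    for frame in frames:
        f = frame.copy()
        f[mask] = edited_first[mask]  # paste the edited region into this frame
        out.append(f)
    return out
```

The point of the sketch is the workflow shape (one manual edit, automatic propagation), not the quality of the result: without motion tracking, anything that moves through the edited region would be painted over.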
Another tool is Project Light Touch, which uses generative AI to reshape light sources in photos. It can change the direction of lighting, make rooms look as if they were illuminated by lamps that weren’t switched on in the original image, and allows users to control the diffusion of light and shadow. It can also insert dynamic lighting that can be dragged across the editing canvas, bending light around and behind people and objects in real time, such as illuminating a pumpkin from within, and turning the surrounding environment from day to night. The color of these manipulated light sources can also be adjusted, letting you tweak warmth or create vibrant RGB-like effects.
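As a point of comparison for what Light Touch automates generatively, the simplest conventional relighting control, warmth, amounts to rebalancing color channels. The function below is an illustrative stand-in, not Adobe's method: it scales red up and blue down for a warm shift (and the reverse for a cool one).

```python
import numpy as np

def adjust_warmth(image, warmth=0.1):
    """Shift an RGB image warmer (warmth > 0) or cooler (warmth < 0).

    image: HxWx3 float array with values in [0, 1]. Scaling red up and
    blue down is a crude stand-in for true color-temperature math, and
    nothing like generative relighting, which changes shading and shadows.
    """
    out = image.copy()
    out[..., 0] = np.clip(out[..., 0] * (1 + warmth), 0, 1)  # red channel
    out[..., 2] = np.clip(out[..., 2] * (1 - warmth), 0, 1)  # blue channel
    return out
```

A per-pixel rebalance like this cannot add a new light source or cast a shadow, which is exactly the gap the generative approach in Light Touch is meant to close.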
Project Clean Take is a new editing tool that can change how speech is enunciated using AI prompts, removing the need to re-record video or audio clips. Users can change the delivery or emotion behind someone’s voice — making them sound happier or inquisitive, for example — or replace words entirely while preserving the identifying characteristics of the original speaker’s voice. It can also automatically separate background noises into individual sources so that users can selectively adjust or mute specific sounds, helping to preserve the overall audio while improving voice clarity.
These are just a handful of the sneaks that were showcased at Adobe’s Max event. Other notable mentions include Project Surface Swap, which lets you instantly change the material or texture of objects and surfaces; Project Turn Style, for editing objects in images by rotating them like 3D models; and Project New Depths, which lets you edit a photograph as if it were a 3D space, identifying when inserted objects should be partially obscured by the surrounding environment. You can read more about each sneak in detail over on Adobe’s blog.
Sneaks aren’t publicly available to use, and they’re not guaranteed to become official features in Adobe’s Creative Cloud software or Firefly apps. Many features, like Photoshop’s Distraction Removal and Harmonize tools, started out as sneaks projects, however, so there’s a good chance that some version of these experimental capabilities will be available to creatives in the future.