Natively Adaptive Interfaces: A new framework for AI accessibility

Summary:
A new AI framework: Natively Adaptive Interfaces help technology understand everyone
Technology's value lies in serving everyone, and nowhere is this more true than in accessibility. For too long, people have had to adapt to technology; a new framework called Natively Adaptive Interfaces (NAI) aims to reverse that dynamic by building adaptability into AI products from the moment they are designed, making accessibility a default setting rather than an afterthought.
At the framework's core is the idea of embedding adaptability deep in a product's foundations instead of simply bolting it on. An AI assistant built on NAI, for example, can intelligently reconfigure itself under the user's guidance to deliver a more personalized, usable experience. In research prototypes, a main AI agent interprets the user's overall goal and coordinates several specialized AI agents to handle specific tasks, such as generating audio descriptions for blind users or simplifying a page's layout for users with attention deficit hyperactivity disorder (ADHD).
This design often produces a "curb-cut effect": features developed for a specific need end up benefiting everyone. A voice-controlled app designed for people with motor disabilities, for instance, can also help a parent holding a baby.
The NAI framework follows the core principle of "Nothing about us, without us." Developers collaborate with the disability community throughout the process to ensure the solutions are genuinely usable. With support from Google.org, the project is funding leading organizations, including the Rochester Institute of Technology's National Technical Institute for the Deaf (RIT/NTID) and The Arc of the United States, to build adaptive AI tools that address real-world pain points.
A signature result is Grammar Lab, an AI tutoring tool built with Gemini models by RIT/NTID English Lecturer Erin Finton. It turns years of teaching experience into an adaptive system that generates individualized American Sign Language (ASL) and English multiple-choice questions matched to each student's skills and language goals, helping students strengthen their bilingual foundations with greater independence and confidence.
This innovative work, carried out in close collaboration with nonprofits and the disability community, shows that when technology learns to adapt to people, a more inclusive, accessible world becomes possible.
English source:
Natively Adaptive Interfaces: A new framework for AI accessibility
We believe technology is at its best when it works for everyone. That’s especially true when it comes to accessibility. For too long, people have had to adapt to technology — we want to build technology that adapts to them.
That’s the idea behind Natively Adaptive Interfaces (NAI), an approach that uses AI to make accessibility a product’s default, not an afterthought. The goal of our research is to build assistive technology that is more personal and effective from the beginning.
How Natively Adaptive Interfaces work
Instead of building accessibility features as a separate, “bolted-on” option, NAI bakes adaptability directly into a product’s design from the beginning. For instance, an AI agent built with the NAI framework can help you accomplish tasks with your guidance and oversight, intelligently reconfiguring itself to deliver a more accessible, personalized experience. In the research prototypes that helped validate this framework, a main AI agent could understand your overall goal and then work with smaller, specialized agents to handle specific tasks, like making a document more accessible by adjusting the UI and scaling text. For example, it might generate audio descriptions for someone who is blind or simplify a page’s layout for someone with ADHD.
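To make that agent architecture concrete, here is a minimal sketch of the orchestration pattern described above: a main agent routes a user's goal through specialized agents selected to match that user's needs. Every name in it (MainAgent, AudioDescriptionAgent, UserProfile and so on) is an illustrative assumption, not part of the actual NAI implementation.

```python
# Hypothetical sketch of the main-agent / specialized-agent pattern.
# None of these names come from NAI itself; they only illustrate the idea.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class UserProfile:
    """Accessibility preferences the agents adapt to."""
    needs_audio_descriptions: bool = False
    prefers_simplified_layout: bool = False


class SpecializedAgent(Protocol):
    def applies_to(self, profile: UserProfile) -> bool: ...
    def adapt(self, document: str, profile: UserProfile) -> str: ...


class AudioDescriptionAgent:
    """Adds audio descriptions for images (stubbed here)."""
    def applies_to(self, profile: UserProfile) -> bool:
        return profile.needs_audio_descriptions

    def adapt(self, document: str, profile: UserProfile) -> str:
        return document + "\n[audio descriptions generated for each image]"


class LayoutSimplifierAgent:
    """Simplifies the page layout, e.g. for readers with ADHD."""
    def applies_to(self, profile: UserProfile) -> bool:
        return profile.prefers_simplified_layout

    def adapt(self, document: str, profile: UserProfile) -> str:
        return "[single-column, low-distraction layout]\n" + document


class MainAgent:
    """Understands the overall goal and delegates to specialized agents."""
    def __init__(self, agents: list[SpecializedAgent]):
        self.agents = agents

    def make_accessible(self, document: str, profile: UserProfile) -> str:
        for agent in self.agents:
            if agent.applies_to(profile):  # reconfigure per user
                document = agent.adapt(document, profile)
        return document


if __name__ == "__main__":
    main = MainAgent([AudioDescriptionAgent(), LayoutSimplifierAgent()])
    profile = UserProfile(needs_audio_descriptions=True)
    print(main.make_accessible("Quarterly report with three charts.", profile))
```

The point of the pattern is that new specialized agents can be added without touching the main agent, which mirrors how NAI treats adaptability as a first-class part of the design rather than a bolt-on.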
This often creates a “curb-cut effect,” where a feature designed for a specific need ends up being helpful for everyone. A voice-controlled app designed for someone with motor disabilities, for instance, can also help a parent holding a child.
Building with and for people with disabilities
The NAI framework is guided by the core principle: “Nothing about us, without us.” Developers collaborate with the disability community throughout their design and development process, ensuring the solutions they create are both useful and usable. With support from Google.org, we’re funding leading organizations that serve disability communities — like the Rochester Institute of Technology’s National Technical Institute for the Deaf (RIT/NTID), The Arc of the United States, RNID and Team Gleason — to build adaptive AI tools for their communities that will solve real-world friction points.
Grammar Lab is an AI-powered tutor developed by RIT/NTID English Lecturer Erin Finton and built with Gemini models. A collaborative effort between RIT/NTID engineers, students and Google, Grammar Lab transforms years of RIT/NTID and Erin’s specialized curriculum into an adaptive tool that uses AI to create individualized multiple choice questions that center students' skills and language goals in both American Sign Language (ASL) and English. This allows them to strengthen language foundations in both ASL and English with greater independence and confidence. We recently highlighted this tool in a film produced for us by BBC StoryWorks Commercial Productions, showcasing how it helps Erin better support her students' learning.
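The post doesn't detail Grammar Lab's internals, but as a rough sketch of the kind of call involved, here is how a tutor like it might ask a Gemini model for a skill-targeted multiple-choice question using the google-genai Python SDK; the prompt wording, model name, and variable names are all assumptions for illustration.

```python
# Hypothetical illustration only: the post does not describe Grammar Lab's
# internals. This shows one way a tutor could ask a Gemini model for an
# adaptive multiple-choice question via the google-genai Python SDK.
from google import genai

client = genai.Client()  # picks up the API key from the environment

# Per-student inputs a tutor might track (names assumed, not Grammar Lab's).
skill_focus = "subject-verb agreement"
language_goal = "moving between ASL gloss and written English"

prompt = (
    "You are an English tutor for Deaf and hard-of-hearing students who use "
    "American Sign Language (ASL). Write one multiple-choice question that "
    f"practices {skill_focus} and supports the goal of {language_goal}. "
    "Give four options labeled A-D and indicate the correct answer."
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name
    contents=prompt,
)
print(response.text)
```

Regenerating the prompt from each student's current skills and goals is what would make such a tutor adaptive: the same small call produces different questions as the student progresses.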
We're excited by the innovative efforts being led by nonprofits and believe that by continuing to build in collaboration with the disability community, we can help make the world a more accessible place.