Learn Your Way: Reimagining textbooks with generative AI
Source: https://research.google/blog/learn-your-way-reimagining-textbooks-with-generative-ai/
Summary:
On September 16, Google's research team announced "Learn Your Way," an interactive learning experiment built on generative AI. The platform uses AI to transform traditional textbooks into dynamic, personalized learning materials; in a recent controlled study of 60 students, those using the platform scored 11 percentage points higher on a retention test than a group using a standard digital reader.
The research notes that traditional textbooks are a one-size-fits-all medium. The new system addresses this limitation with two technical pillars: first, personalizing the source material to the learner's grade level and interests (e.g., sports or music), replacing generic examples and adjusting difficulty; second, generating multiple modalities of content, including sectioned immersive text with images, interactive quizzes, narrated slides, audio lessons that simulate teacher-student dialogue, and mind maps.
The system builds on Google's LearnLM models and combines multi-step agentic AI workflows with a model fine-tuned for generating educational illustrations. In early evaluations, pedagogical experts rated the AI-generated content at 0.85 or higher (on a 0-1 scale) for accuracy, coverage, and learning-science quality.
Beyond the quantitative learning gains, 93% of users said they would like to keep learning with the platform, and all participants reported that it increased their confidence. The experiment is now available on Google Labs, with examples spanning subjects from history to physics.
The researchers describe this work as an early step in moving educational content from static delivery toward dynamic adaptation, with future work focused on improving the real-time adaptivity of personalized learning systems.
English source:
Learn Your Way: Reimagining textbooks with generative AI
September 16, 2025
Gal Elidan, Research Scientist, and Yael Haramaty, Senior Product Manager, Google Research
New research into GenAI in education demonstrates a novel approach to reimagining textbooks that led to improved learning outcomes in a recent study. The research comes to life in our interactive experience, Learn Your Way, now available on Google Labs.
Textbooks are a cornerstone of education, but they have a fundamental limitation: they are a one-size-fits-all medium. The manual creation of textbooks demands significant human effort, and as a result they lack alternative perspectives, multiple formats and tailored variations that can make learning more effective and engaging. At Google, we’re exploring how we can use generative AI (GenAI) to automatically generate alternative representations or personalized examples, while preserving the integrity of the source material. What if students had the power to shape their own learning journey, exploring materials using various formats that fit their evolving needs? What if we could reimagine the textbook to be as unique as every learner?
Recent advances in GenAI are bringing this vision closer to reality. Today we are excited to introduce Learn Your Way, now on Google Labs, a research experiment that explores how GenAI can transform educational materials to create a more effective, engaging, learner-driven experience for every student. Here we outline the research and pedagogy underpinning Learn Your Way, with more details in the accompanying tech report. We also report early indicators of its impact: in our efficacy study, students using Learn Your Way scored 11 percentage points higher on retention tests than students using a standard digital reader.
Grounded in learning, built for the student
Our approach is built on two key pillars that work together to augment the learning experience: (1) generating various multimodal representations of the content, and (2) taking foundational steps toward personalization.
The seminal dual coding theory states that forging mental connections between different representations strengthens the underlying conceptual schema in our brain. Subsequent research indeed showed that when students actively engage with information in various formats, they build a more robust and complete mental model of the material. Inspired by this, our approach empowers students with the agency to choose and intermix multiple formats and modalities to best help them understand the material. In addition, personalization is increasingly becoming an aspirational standard in K-12 educational settings, and so our research reflects this. We aim to enhance the relatability and effectiveness of educational content by adapting it to student attributes. Moreover, we incorporate quizzing capabilities that enable us to further tailor the experience according to the learners’ real-time responses. Such personalization can be a powerful method for enhancing motivation and deepening learning.
Bringing this to life involves a layered technical approach using LearnLM, our best-in-class pedagogy-infused family of models, now integrated directly into Gemini 2.5 Pro. The first layer is a unique personalization pipeline that serves as the basis for the second layer of multiple content representations. Our starting point is a textbook PDF, although our approach could be used with other forms of source material.
The personalization pipeline
The Learn Your Way interface asks the learner to select their grade and interests (e.g., sports, music, food). The original source material is first re-leveled to the learner’s reported grade level, while maintaining the scope of its content. This is followed by the strategic replacement of generic examples with ones that are personalized to the learner’s reported interests. The resulting text serves as the basis for the generation of all the other representations, effectively propagating the personalization effect and setting up a pipeline for further personalization.
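The two-step pipeline described above (re-level the text, then swap in interest-based examples, then reuse the result everywhere) can be sketched as follows. This is a minimal illustration, not the actual Learn Your Way implementation: the `llm_relevel` and `llm_swap_examples` helpers are hypothetical stand-ins for real model calls.

```python
from dataclasses import dataclass


@dataclass
class LearnerProfile:
    """The attributes the interface asks for: grade and interests."""
    grade: int
    interests: list  # e.g., ["sports", "music", "food"]


def llm_relevel(text: str, grade: int) -> str:
    # Placeholder for a model call that rewrites the text at the target
    # grade level while preserving the scope of its content.
    return f"[grade-{grade}] {text}"


def llm_swap_examples(text: str, interests: list) -> str:
    # Placeholder for a model call that replaces generic examples with
    # ones tailored to the learner's reported interests.
    return f"{text} [examples themed around: {', '.join(interests)}]"


def personalize(source_text: str, profile: LearnerProfile) -> str:
    # Step 1: re-level to the learner's grade, same content scope.
    leveled = llm_relevel(source_text, profile.grade)
    # Step 2: strategic replacement of generic examples.
    return llm_swap_examples(leveled, profile.interests)


# The personalized text becomes the single basis for every other
# representation, so the personalization propagates downstream.
profile = LearnerProfile(grade=10, interests=["sports", "music"])
base_text = personalize("Neurons transmit signals across synapses.", profile)
```

Because every later representation is generated from `base_text` rather than from the original PDF, a single personalization pass carries through the whole experience.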
Multiple representations of content
Following the source personalization, we generate multiple representations of the content. For some content representations, such as mind maps and timelines, Gemini’s broad capabilities are used directly. Other features such as narrated slides, require more elaborate pipelines that weave together multiple specialized AI agents and tools to achieve an effective pedagogical result. Finally, specialized tasks, such as generating effective educational visuals, proved too challenging even for state-of-the-art general-purpose image models. To overcome this, we fine-tuned a dedicated model specifically for generating educational illustrations. The combination of a powerful base model, multi-step agentic workflows, and fine-tuned components allows us to generate a wide range of high-quality multimodal representations for learning.
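The three generation strategies named above (direct model calls, multi-step agentic pipelines, and a fine-tuned illustration model) could be organized behind a simple dispatch registry. Everything in this sketch, including the generator names and the registry layout, is an illustrative assumption rather than Google's implementation.

```python
# Hypothetical sketch: route each content representation to the
# generation strategy the text describes. The three strategies below
# are placeholders for real model-backed pipelines.

def direct_model_call(text: str, kind: str) -> str:
    # Representations like mind maps and timelines use the base
    # model's broad capabilities directly.
    return f"{kind} via base model: {text[:20]}..."


def agentic_pipeline(text: str, kind: str) -> str:
    # Narrated slides and audio lessons weave together multiple
    # specialized steps, e.g. outline -> draft -> narration.
    outline = f"outline({text[:10]})"
    return f"{kind} via agents: {outline}"


def finetuned_illustrator(text: str, kind: str) -> str:
    # Educational visuals come from a dedicated fine-tuned model.
    return f"{kind} via fine-tuned image model"


# Assumed registry mapping representation type -> strategy.
GENERATORS = {
    "mind_map": direct_model_call,
    "timeline": direct_model_call,
    "narrated_slides": agentic_pipeline,
    "audio_lesson": agentic_pipeline,
    "illustrations": finetuned_illustrator,
}


def generate_all(personalized_text: str) -> dict:
    """Produce every representation from the one personalized text."""
    return {kind: gen(personalized_text, kind)
            for kind, gen in GENERATORS.items()}


reps = generate_all("Adolescent brain development chapter")
```

A registry like this keeps the per-representation complexity isolated: adding a new format means registering one more generator, without touching the personalization layer.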
The Learn Your Way experience
Our research comes to life in Learn Your Way. The interface brings together multiple, personalized representations of content including: (1) immersive text, (2) section-level quizzes, (3) slides & narration, (4) audio lessons, and (5) mind maps.
- Immersive text: Breaks the content up into digestible sections that are augmented with generated images and embedded questions. Put together, these transform passive reading into an active multimodal experience that follows learning science principles.
- Section-level quizzes: Promote active learning by allowing a user to interactively assess their learning, and uncover existing knowledge gaps.
- Slides & narration: Offers presentations that span the entire source material and include engaging activities like fill-in-the-blanks, as well as a narrated version, mimicking a recorded lesson.
- Audio lesson: Provides simulated conversations, coupled with visual aids, between an AI-powered teacher and a student that models how a real learner might engage with the material, including the expression of misconceptions, which are clarified by the teacher.
- Mind map: Organizes the knowledge hierarchically and allows learners to zoom in and out from the big picture to the details.
The above representations give learners choice and are all adapted to their selected grade level and personal interests. Throughout the experience, the interactive quizzes provide dynamic feedback, guiding students to revisit specific content areas where they struggled. This marks our first steps towards true personalization.
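Quiz-driven guidance of the kind described above can be approximated with per-section score tracking. The mastery threshold and data layout below are illustrative assumptions, not details from the tech report.

```python
from collections import defaultdict


class QuizTracker:
    """Minimal sketch: track per-section quiz scores and flag the
    sections a learner should revisit (threshold is an assumption)."""

    def __init__(self, mastery_threshold: float = 0.7):
        self.threshold = mastery_threshold
        self.scores = defaultdict(list)  # section -> scores in [0, 1]

    def record(self, section: str, score: float) -> None:
        self.scores[section].append(score)

    def sections_to_revisit(self) -> list:
        # Recommend any section whose average score falls below the
        # mastery threshold, in the order sections were first quizzed.
        return [section for section, xs in self.scores.items()
                if sum(xs) / len(xs) < self.threshold]


tracker = QuizTracker()
tracker.record("brain-development", 0.5)
tracker.record("neural-pruning", 0.9)
```

In an interactive experience, the revisit list would drive the dynamic feedback: low-scoring sections surface first when the learner chooses what to review next.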
Pedagogical evaluation
To evaluate Learn Your Way's pedagogical performance, we transformed ten varied source materials from OpenStax (a provider of free educational textbooks) under three different personalization settings. The source materials covered various subjects from history to physics. Three pedagogical subject matter experts then evaluated the transformed materials using pedagogical criteria, such as accuracy, coverage, and the LearnLM learning science principles.
The results were highly positive, with an average expert rating of 0.85 or higher across all pedagogical criteria. See the tech report for more evaluation details.
Efficacy study
An AI-powered learning tool is only valuable if it both improves learning outcomes and is something students actually want to use. Learn Your Way now serves as a research platform for us to conduct studies with partners around the world to explore how AI-powered transformations and personalization affect outcomes, and to ensure that what we build is effective and locally relevant.
Recently, we conducted a randomized controlled study with 60 students from the Chicago area, ages 15–18 and with similar reading levels. Participants were given up to 40 minutes to learn about adolescent brain development from a textbook, and randomly assigned to learn using Learn Your Way or a traditional digital PDF reader.
We assessed students with a quiz immediately after the study session, and with a retention test 3–5 days later, using assessments designed by pedagogical experts to be a good measure of content comprehension. We also surveyed them about the learning experience, and to gain deeper insights beyond these quantitative metrics, each student participated in a 30-minute qualitative interview where they could share more nuanced feedback about their experience.
The results were compelling and statistically significant. Here are the highlights; see the tech report for more details.
- Positive learning outcomes: The Learn Your Way group scored, on average, 9% higher on the immediate assessment following the study session.
- Better long-term retention: Similarly, the Learn Your Way group scored 11% higher on the retention assessment 3–5 days later (78% vs. 67%).
- Positive user sentiment: 100% of students who used Learn Your Way reported that they felt the tool made them more comfortable taking the assessment, compared to 70% in the digital reader control group. 93% said they would want to use Learn Your Way for future learning, compared to just 67% for the digital reader.
- Valuable experience: Insights from the qualitative interviews revealed that students found great value in Learn Your Way.
Experience Learn Your Way
To give a concrete feel for the Learn Your Way interactive experience, today we are releasing example experiences on Google Labs, including:
The path forward
Our findings suggest that generative AI can be used to build learning experiences that are not only more effective but also more empowering. By evolving the static textbook into an interactive artifact and giving students greater agency over how they learn, we saw learning retention improve.
This work is just the beginning of our exploration. We envision many more ways to tailor content, moving towards systems that continuously adapt to each learner's unique needs and progress. As we take our next steps towards personalized education, we will continue to ground our research in pedagogical principles, measuring the impact of AI on learning efficacy, so that in the future every student might have access to a high-quality, engaging learning experience that is custom built for them.
Acknowledgements
Shout out to our Google Research LearnLM team who have contributed to this work: Alicia Martín, Amir Globerson, Amy Wang, Anirudh Shekhawat, Anisha Choudhury, Anna Iurchenko, Avinatan Hassidim, Ayça Çakmakli, Ayelet Shasha Evron, Charlie Yang, Courtney Heldreth, Dana Oria, Diana Akrong, Hairong Mu, Ian Li, Ido Cohen, Komal Singh, Lev Borovoi, Lidan Hackmon, Lior Belinsky, Michael Fink, Preeti Singh, Rena Levitt, Shashank Agarwal, Shay Sharon, Sophie Allweis, Tracey Lee-Joe, Xiaohong Hao, Yael Gold-Zamir, Yishay Mor, and Yoav Bar Sinai. Special thanks to our executive champions: Niv Efron, Avinatan Hassidim, Yossi Matias and Ben Gomes.