快来看,n8n更新了!20个最佳MCP服务器推荐:开发者如何构建自主代理工作流

内容来源:https://blog.n8n.io/best-mcp-servers/
内容总结:
模型上下文协议(MCP)从“魔法”到生产级工作流的跨越:一份面向开发者的实战指南
许多开发者初次接触模型上下文协议(MCP)时,都惊叹于其便捷性:只需将Claude等AI助手连接到本地数据库,用自然语言提问,它便能即时执行复杂的SQL查询。然而,这种“魔法”存在明显局限——一旦关闭本地IDE,智能体便随之“死亡”,无法持续响应客户邮件、按计划运行任务或触发警报,强大的工具被束缚在个人开发环境中。
本文旨在打破这一壁垒。我们将系统梳理适用于编码、数据和运维等领域的最佳MCP服务器,并展示如何利用自动化平台n8n对它们进行编排。最终,开发者将获得一套精选工具集和一套方法论,能够将临时的对话交互转化为持久、自动化的智能系统。本指南适合已了解大语言模型基础、希望构建生产级AI工作流的开发者。
如何筛选可靠的MCP服务器?
当前MCP生态发展迅猛,GitHub上相关仓库数以百计,但其中大量是实验性项目或已无人维护的“业余作品”。为过滤噪音,我们依据严格标准进行评估,核心关注点在于“生产就绪度”:
- 官方与成熟度:优先选择由供应商官方维护的服务器(如Sentry、Stripe)。若无官方版本,则选择维护活跃、采用度高的“经社区验证”项目,严格规避已被放弃的项目。
- 架构稳定性:优先支持Docker部署的服务器。通过容器化可以避免因本地Node.js版本或操作系统库差异导致的运行问题,比通过npx直接在主机运行复杂依赖更为可靠。
- 编排潜力:评估服务器能否融入自动化流程。我们选择那些能暴露结构化工具、并可通过n8n串联成更大规模工作流的服务器,而非仅能在聊天窗口内使用的“玩具”。
精选MCP服务器分类推荐
以下列表中的服务器标注了两种关键部署标签:
- Docker:适合自托管。意味着您可以在自有基础设施上完整部署,完全掌控数据与日志。
- Remote:实现自动化的关键。意味着服务器可通过URL远程访问,使得n8n等工具能通过网络调用其功能,无需安装在同一台机器上,极大简化了云端的集成工作。
数据与存储
- PostgreSQL MCP:赋予智能体直接执行查询的能力,可即时检查数据模式并运行SELECT语句回答问题。
- Qdrant MCP Server:可作为RAG实现的向量数据库,也能充当智能体的自主长期记忆,存储和检索“记忆”或技术上下文。
- MongoDB MCP Server:官方NoSQL集成,能将自然语言问题转换为复杂的聚合管道查询。
云与基础设施可观测性
- Kubernetes MCP:安全地与Kubernetes集群交互,如列出Pod、描述故障、重启服务。
- AWS/Azure/Google Cloud MCP Servers:官方或参考实现,允许智能体通过自然语言检查和管理云资源。
- Cloudflare/Vercel MCP Servers:与各自生态系统紧密集成,便于检查部署状态、日志和运行时错误。
- Grafana MCP:连接智能体到指标看板,查询数据源并获取可视化快照以诊断性能问题。
- PagerDuty MCP Server:充当“值班”助手,自动获取事件详情、检查值班人员并触发解决流程。
- Sentry MCP Server:直接连接错误追踪系统,可检索生产环境中最频繁的错误及其堆栈跟踪。
开发与测试工具
- GitHub MCP Server:编码工作流核心,允许智能体读取文件、搜索仓库、管理分支和创建拉取请求。
- Postman MCP Server:运行和测试API集合,验证新部署的端点。
- Context7 MCP Server:专为技术文档优化的搜索工具,能查找最新的框架语法和编码模式。
- Playwright MCP:运行端到端测试或模拟用户浏览网页,验证UI更改或自动化无API的浏览器任务。
产品与业务运营
- Notion MCP Server:提供对团队维基的读写权限,可在产品需求与代码骨架间建立连接。
- Stripe MCP:理想用于调试计费问题,查询客户订阅或检查测试模式的失败交易。
- Jira MCP:连接项目管理与代码开发,允许智能体查找工单、记录工作并更新事务状态。
如何使用n8n高效编排MCP服务器?
成功的智能体系统需要将分散的MCP服务器编排成协同工作流。n8n提供了直观的环境来实现这种编排,桥接了自主执行与智能体推理。
通信基础:两种传输模式
MCP使用两种传输模式,混淆它们是集成失败的常见原因:
- Stdio:适用于Cursor、Claude Desktop等桌面应用的默认模式,要求AI智能体与服务器在同一物理机器上,不适用于云自动化。
- 可流式HTTP:自主智能体的首选。支持跨网络的无状态可靠连接,实现了智能体与工具的分离,是n8n与MCP服务器通信的标准方式。
连接方式
根据基础设施,有两种主要配置方式:
- Docker-Compose方案:自托管n8n时最稳健。将MCP服务器作为同一Docker网络中的容器运行,通过服务名直接通信,无需暴露端口到公网。
- 远程MCP服务器方案:使用n8n Cloud或连接第三方托管的MCP服务器时,通过标准Web请求进行通信,通常需要身份验证。
实战:构建自主工作流示例
我们构建一个将新闻通讯自动转化为博客创意的自主循环系统,涉及Gmail、GitHub MCP和Discord:
- 触发:Gmail监听节点过滤技术类新闻通讯邮件。
- 分析:LLM智能体分析邮件内容,判断是否与博客主题相关。
- 增强:若相关,则抓取邮件中的链接内容以丰富上下文。
- 人工介入:通过Discord节点请求人工批准。
- 执行:GitHub MCP服务器创建详细的任务议题并分配给Copilot。
- 记忆与审计:可选使用Notion MCP服务器将信息保存到数据库。
此工作流实现了从被动响应到主动任务创建与委派的跨越,智能体能在开发者休息时持续工作。
无现成MCP服务器?用n8n快速构建
若所需工具无现成MCP服务器(如Google Sheets),无需从头编写代码。利用n8n的MCP Server Trigger节点,可在5分钟内可视化地将任意API(如Google Sheets节点、Gmail节点)封装成自定义MCP服务器,并暴露给Claude或VSCode Copilot等应用使用。
总结与行动路线
我们始于“智能但孤立”的智能体困境,通过n8n编排MCP服务器,构建出了真正的自主智能体助手。它不再仅是工具访问器,而是能自主分析、决策和触发复杂工作流的系统。
下一步行动建议:
- 部署n8n:启动本地或云实例。
- 连接与扩展:将n8n智能体连接到MCP服务器,或探索n8n丰富的AI集成库以扩展能力。
- 构建与迭代:从零开始构建您的智能体循环,或参考n8n官方AI工作流模板进行逆向工程。
- 自定义集成:若列表中没有所需工具,使用n8n将任何内部或第三方API封装成自定义MCP服务器。
别再让智能体困于本地,现在就开始构建您的生产级AI自动化系统吧。
中文翻译:
模型上下文协议(MCP)在尝试部署之前,感觉就像魔法一样神奇。你将Claude连接到本地数据库,用自然语言提问,它就能立即执行复杂的SQL查询。但一旦你合上笔记本电脑,那个智能体就消失了。它无法回复客户邮件、按计划运行任务或触发警报。你强大的工具被局限在本地IDE中。
在本指南中,我们将打破这些壁垒。我们将分类介绍适用于编码、数据和运维的最佳MCP服务器,并展示如何用n8n编排它们。最终,你将获得一套精选工具集和一套方法,将临时对话转化为持久的自动化系统。
本指南专为理解大语言模型基础但希望构建生产级AI工作流的开发者优化。让我们开始吧!
我们如何筛选这份MCP服务器清单
MCP生态正在爆发式增长。如今在GitHub上搜索会找到数百个仓库,但许多只是实验性的“Hello World”实现或无人维护的业余项目。
为过滤噪音,我们依据严格标准评估了大量服务器。我们不仅关注星标数,更看重生产就绪度。
- 官方成熟实现:我们优先选择供应商自行维护的“官方”服务器(如Sentry或Stripe)。若无官方选项,则选择积极维护且采用度高的“成熟社区”项目,严格避开被弃置的短期项目。
- 架构稳定性(Docker对比原生):我们优先提供Docker实现的服务器(如PostgreSQL或Puppeteer服务器)。通过npx直接在主机运行复杂依赖很脆弱;容器化能确保服务器不受本地Node.js版本或操作系统库影响。
- 编排潜力:最后我们问:“这能扩展吗?”仅能在聊天窗口工作的服务器只是玩具。我们选择那些能暴露结构化工具的服务器,这些工具可通过n8n串联成更大的自动化工作流。
最适合开发者的20款MCP服务器有哪些?
在下方的列表中,你会看到两个关键标签。它们对你的配置意味着:
- Docker(最适合自托管):这些服务器包含Dockerfile或预构建镜像。这意味着你可以完全在自有基础设施(无论是VPS、家庭实验室还是私有云)上托管它们。你拥有数据、控制日志,且不完全依赖第三方。
- 远程:这是实现自动化的关键。它意味着服务器不局限于本地命令行,能暴露URL。这使得n8n等工具能通过网络连接到服务器,让你的工作流能“远程调用”这些工具,而无需安装在同一台机器上。这在n8n Cloud中让一切变得更简单,你只需插入远程服务器的URL,无需担心DNS或复杂网络配置。
数据与记忆
为你的智能体提供持久化存储和RAG能力。
PostgreSQL MCP(CrystalDBA开发)
网站:crystaldba/postgres-mcp
部署方式:Docker(自托管)
为何使用:与其让大语言模型编写需要你复制粘贴的查询,此服务器赋予智能体直接执行权限。它能检查模式并运行SELECT语句,即时回答关于数据的问题。
注:若使用Neon PostgreSQL,可查看官方Neon Remote MCP作为远程MCP选项。
Qdrant MCP服务器
网站:qdrant/mcp-server-qdrant
部署方式:Docker(自托管)
为何使用:这可以是你的RAG实现的向量存储。但由于它暴露了存储和检索信息的工具,它也能充当智能体的自主长期记忆。你的智能体能自主存储和检索“记忆”或技术上下文,避免对旧数据产生幻觉。
MongoDB MCP服务器
网站:mongodb-js/mongodb-mcp-server
部署方式:Docker(自托管)
为何使用:NoSQL数据的官方集成。它将自然语言问题转化为复杂的聚合管道,让智能体能查询非结构化数据,而你无需记住特定的操作符语法。
云与基础设施可观测性
无需离开工作流即可管理基础设施、Kubernetes集群、分析日志和警报。
Kubernetes MCP
网站:containers/kubernetes-mcp-server
部署方式:Docker(自托管)
为何使用:kubectl的封装,允许安全地与集群交互。你的智能体能安全地列出Pod、描述故障,甚至在开发/测试环境中重启服务。
AWS MCP
网站:awslabs/mcp
部署方式:Docker(自托管)、远程
为何使用:AWS的官方参考实现。它将各种AWS SDK能力暴露给智能体,允许直接从聊天界面进行资源检查和管理。
Azure MCP服务器
网站:Azure.Mcp.Server
部署方式:Docker(自托管)、远程
为何使用:微软官方的Azure资源管理实现。它使智能体能与Azure资源管理器(ARM)交互以审计和修改资源。
Google Cloud MCP服务器
网站:googleapis/gcloud-mcp
部署方式:仅限本地
为何使用:提供对Google Cloud资源的智能体访问。适用于通过自然语言列出计算实例、检查存储桶或管理IAM角色。
Cloudflare MCP服务器
网站:Cloudflare Agents
部署方式:远程
为何使用:允许智能体与Cloudflare Workers、KV和DNS设置交互。非常适合快速检查部署状态或清除缓存而无需登录仪表板。
Vercel MCP
网站:Vercel官方
部署方式:远程
为何使用:与Vercel生态系统紧密集成。它允许智能体检查部署日志和运行时错误。如果构建失败,智能体能拉取日志、分析错误并立即提出配置修复方案。
Grafana MCP
网站:grafana/mcp-grafana
部署方式:Docker(自托管)
为何使用:将你的智能体连接到指标和仪表板。它使智能体能查询数据源并检索可视化快照以诊断性能异常。
PagerDuty MCP服务器
网站:PagerDuty/pagerduty-mcp-server
部署方式:Docker(自托管)、远程
为何使用:终极“值班”助手。无需醒来手动检查警报,自动化智能体能获取事件详情、检查值班人员并自动触发解决工作流。
Sentry MCP服务器
网站:Sentry官方
部署方式:远程
为何使用:将你的智能体直接连接到错误追踪。你可以问“生产中最常见的错误是什么?”,智能体将检索堆栈跟踪、从GitHub读取对应文件并提出修复方案。
开发与测试工具
GitHub MCP服务器
网站:github/github-mcp-server
部署方式:Docker(自托管)、远程
为何使用:任何编码工作流都必不可少。它允许智能体读取文件内容、搜索仓库、管理分支和创建拉取请求,而无需本地git客户端。
Postman MCP服务器
网站:postmanlabs/postman-mcp-server
部署方式:Docker(自托管)、远程
为何使用:允许智能体运行和测试你的API集合。当你部署新端点时,智能体可通过执行现有Postman测试套件来验证其是否实际工作。
Context7 MCP服务器
网站:upstash/context7
部署方式:Docker(自托管)、远程
为何使用:专门针对技术文档优化的专业搜索工具。与通用网络搜索不同,它专为查找最新的框架语法和编码模式而调整,以馈入智能体的上下文窗口。
Playwright MCP
网站:microsoft/playwright-mcp
部署方式:Docker(自托管)
为何使用:使智能体能运行端到端测试或像用户一样浏览网页。适用于验证UI更改或自动化缺乏API的基于浏览器的任务。
产品与业务运营
Notion MCP服务器
网站:makenotion/notion-mcp-server
部署方式:Docker(自托管)、远程
为何使用:提供对团队维基的读写访问。智能体能读取Notion中的“产品需求”页面并生成相应的代码骨架,或将技术决策记录回维基。
Stripe MCP
网站:Stripe官方
部署方式:远程
为何使用:调试计费问题的理想选择。你可以在Stripe测试模式下查询客户订阅或检查失败交易,而无需登录仪表板界面。
Jira MCP
网站:atlassian/atlassian-mcp-server
部署方式:远程
为何使用:弥合项目管理和代码之间的鸿沟。它允许智能体在Jira Cloud中查找工单、记录工作并转换问题状态,在你编码时保持待办事项更新。
如何高效运行、管理和编排MCP(使用n8n)?
成功的智能体系统不仅需要一堆分散的工具;你必须将这些MCP服务器编排成连贯的工作流。
n8n提供了一个直接的环境来处理这种编排,有效弥合了自主执行(始终在线、基于触发的后台进程)与智能体AI(动态推理和工具利用)之间的差距。
在构建第一个流水线之前,我们必须建立底层通信层。
AI智能体如何与MCP服务器通信
模型上下文协议使用两种不同的传输方式,混淆它们是集成失败的最常见原因。
- Stdio(本地上下文):这是Cursor、Claude Desktop或GitHub Copilot等桌面应用的默认模式。其工作方式是:你将MCP服务器作为本地子进程启动(字面上运行类似npx run的命令)。问题在于它要求AI智能体和服务器必须在同一物理机器上,这使得它无法用于云自动化或后台工作器。
- 可流式HTTP:对于自主智能体,生态已转向可流式HTTP。它允许在任何网络(无论是内部Docker网络还是公共互联网)上进行无状态、可靠的连接。它将智能体与工具解耦:你的n8n实例可以在一个容器中,而PostgreSQL MCP服务器在另一个容器中,通过标准HTTP请求通信。
将MCP服务器连接到n8n:Docker或远程
根据你的基础设施,在n8n中使用MCP客户端工具节点有两种主要配置方式。
- Docker-Compose设置:如果你自托管n8n,最稳健的方法是在同一Docker网络中将MCP服务器作为伴生容器运行。因为Docker为同一桥接网络上的容器提供内置DNS解析,你无需将MCP服务器端口暴露到公共互联网,n8n可以直接使用Docker服务名与服务器通信。
- 远程MCP服务器设置:如果你使用n8n Cloud或连接到第三方托管的MCP服务器(如官方GitHub Copilot端点),架构依赖于标准Web请求。此方法通常需要某种身份验证。
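上述Docker-Compose方案可以用一份最小的docker-compose.yml来示意。注意:其中MCP服务器的镜像名、内部端口与环境变量都是假设的占位写法,实际取值请以所用MCP服务器的文档为准;要点在于两个容器共享同一桥接网络,且MCP容器不向公网暴露任何端口。

```yaml
services:
  n8n:
    image: n8nio/n8n          # 官方n8n镜像
    ports:
      - "5678:5678"           # 只有n8n界面对外暴露
    networks: [agent-net]

  mcp-grafana:                # 示例伴生容器,镜像名为假设
    image: mcp/grafana:latest
    environment:
      GRAFANA_URL: http://grafana:3000
    # 刻意不写ports:仅在内部网络可达
    networks: [agent-net]

networks:
  agent-net:
    driver: bridge
```

在n8n的MCP客户端工具节点中,端点可直接填写形如 http://mcp-grafana:8000/mcp 的内部地址(服务名即主机名,端口为假设值)。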
在智能体自动化工作流中使用MCP服务器
我们将构建一个自主循环,利用n8n、GitHub MCP和Discord实现人机协同,将阅读新闻通讯转化为博客创意。这个智能体不再局限于聊天窗口,而是由传入邮件触发:
- Gmail监听器筛选技术新闻通讯。
- 基于大语言模型的智能体分析邮件正文,判断内容是否匹配作品集主题(例如技术博客文章或新技术观察列表等)。
- 如果在新闻通讯中找到与作品集网站相关的内容,我们通过抓取新闻通讯中的实际链接来增强智能体的上下文。
- Discord“人机协同”节点在采取行动前请求批准。
- GitHub MCP服务器创建详细问题并分配给Copilot。
首先,我们配置Gmail触发器节点。为避免噪音,我们将轮询时间设为“每小时”,并严格筛选标签为“新闻通讯”的邮件。
在系统消息中,我们定义角色及以下工作流指令:
- 它阅读新闻通讯并尝试为网站或博客寻找合适创意,生成10个想法。
- 通过Discord请求反馈,以挑选和改进想法。
- 想法获批后,从新闻通讯中的文章或文档链接获取与想法相关的额外内容。
- 然后规划具体详细的GitHub问题,包括验收标准,以便后续接手工作。
- 它可以将全部或部分问题委托给GitHub Copilot以开始实际实施。
- 使用Notion MCP,它可以读取或保存信息到Notion数据库,充当智能体的记忆和审计追踪。
接下来,使用MCP客户端工具节点,通过公共端点https://api.githubcopilot.com/mcp/连接到GitHub MCP服务器。选择HTTP流式传输作为传输方式,并通过Bearer令牌进行身份验证。
智能体将仅拥有MCP服务器提供的特定工具子集:assign_copilot_to_issue、add_issue_comment、issue_write和issue_read。因此它永远不会读取或编辑仓库中的实际代码,只会创建问题并将其委托给GitHub Copilot。
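n8n的MCP客户端工具节点允许直接勾选要暴露给智能体的工具;如果你在自建客户端里想复现这种最小权限做法,思路可以用下面的Python草图表示(工具描述的结构与过滤函数均为示意,白名单即正文中的四个工具名):

```python
# 最小权限示意:只放行白名单内的MCP工具,其余一概不注入智能体
ALLOWED_TOOLS = {
    "assign_copilot_to_issue",
    "add_issue_comment",
    "issue_write",
    "issue_read",
}

def filter_tools(server_tools: list[dict]) -> list[dict]:
    """server_tools为服务器tools/list返回的工具描述列表(结构为示意)。"""
    return [t for t in server_tools if t.get("name") in ALLOWED_TOOLS]

# 例:即使服务器同时暴露了仓库写入工具,过滤后智能体也只剩issue相关能力
advertised = [
    {"name": "issue_read"},
    {"name": "push_files"},   # 不在白名单内,将被过滤掉
    {"name": "issue_write"},
]
print([t["name"] for t in filter_tools(advertised)])  # → ['issue_read', 'issue_write']
```

这样即便服务器端新增了危险工具,智能体的能力边界也不会悄然扩大。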
然后我们配置两种用户交互类型:
- 通过Discord批准——你将在Discord上收到消息和两个按钮,以便快速批准或拒绝操作。
- 请求反馈:与按钮不同,这允许文本输入。你可以回复具体指令或修正,引导智能体在提交任何代码前修订计划。
实现人机协同的主要功能是Discord节点中的“发送并等待响应”功能。
你还可以将其他n8n工作流作为工具添加到智能体,这可以带来一些创造性的组合方式。例如,我们有一个网页抓取n8n工作流,我们将其转化为工具供智能体使用,通过“调用n8n工作流工具”节点抓取网站并将内容用作上下文。
此时,我们可以开始测试这个工作流。它会抓取你收到的最新新闻通讯文章,你应该开始在Discord上看到交互:
在上面的示例中,AI智能体为我的作品集网站识别了一些创意,然后在该仓库上创建了GitHub问题,以便我审查、开始实施或分配给GitHub Copilot。
太棒了!
没有MCP服务器?没问题。
假设你希望AI智能体将研究笔记记录到Google Sheets,但在GitHub上找不到“Google Sheets MCP”。你无需用TypeScript编写新服务器。你可以在5分钟内在n8n中可视化构建它。
要在n8n中创建MCP服务器,我们使用MCP服务器触发器节点。然后我们可以暴露常规n8n节点,如Google Sheets节点或Gmail节点。
我们可以配置Google Sheets工具以允许从表格读取行,并配置Gmail工具以允许发送邮件。
“收件人”、“主题”和“消息”字段将由使用此MCP服务器的AI智能体自动填写。
现在你可以使用此MCP服务器的n8n URL,并将其添加到你喜欢的Copilot应用(如Claude或VSCode Copilot)中。在这个案例中,我让VSCode中的GitHub Copilot能够提问并与Google Sheets中的新闻通讯内容交互,并赋予它转发或回复邮件的能力。现在你可以与你选择的AI智能体进行这类交互:
我们提示Copilot将Google Sheets中的消息转发到我的邮箱。你也可以要求它直接回复你的联系查询。
总结
我们以常见的挫败感开始本指南:“智能但孤立”的智能体。
通过使用n8n编排这些MCP服务器,我们突破了这一限制。我们不仅仅是给智能体“访问”工具的权限;我们构建了一个自主的智能体助手。
与其手动提示大语言模型检查邮件,我们构建了一个系统:它在你睡觉时醒来,阅读并分析邮件,然后创建任务并委托给GitHub Copilot。
下一步是什么?
别让这个架构停留在书签里。以下是本周末从“阅读”到“交付”的路线图:
- 部署n8n:让本地或云实例运行起来。
- 将n8n智能体连接到MCP服务器,或探索更广泛的n8n AI集成库以扩展智能体能力。
- 从头构建你的智能体循环,或从n8n AI工作流模板目录中逆向工程生产就绪的模式。
如果你在此列表中未找到所需工具,无需等待供应商。你可以使用n8n封装任何API(HubSpot、Salesforce、你的内部遗留应用),并将其暴露为自定义MCP服务器。
祝你自动化愉快!
英文来源:
The Model Context Protocol (MCP) feels like magic until you try to deploy it. You connect Claude to your local database, ask a question using natural language, and it executes complex SQL instantly. But the moment you close your laptop, that agent dies. It cannot react to customer emails, run on a schedule, or trigger alerts. Your powerful tools are trapped in your local IDE.
In this guide, we will break down these barriers. We will categorize the best MCP servers for coding, data, and ops, and then show you how to orchestrate them using n8n. By the end, you will have a curated toolkit and a method to turn temporary chats into persistent, automated systems.
This guide is optimized for developers who understand LLM basics but want to build production-grade AI workflows. Let's dive in!
How we composed this MCP server list
The MCP ecosystem is exploding. A search on GitHub today yields hundreds of repositories, but many are experimental "Hello World" implementations or unmaintained hobby projects.
To filter the noise, we evaluated lots of servers against strict criteria. We didn't just look for stars; we looked for production readiness.
- Official and mature implementations: We prioritized "Official" servers maintained by the vendors themselves (like Sentry or Stripe). When an official option wasn't available, we selected "proven community" projects with active maintenance and high adoption, strictly avoiding abandoned weekend projects.
- Architectural stability (Docker vs. raw): We prioritized servers that offer Docker implementations (like the PostgreSQL or Puppeteer servers). Running complex dependencies directly on your host machine via npx is fragile; containerization ensures the server works regardless of your local Node.js version or OS libraries.
- Orchestration potential: Finally, we asked: "Can this scale?" A server that only works in a chat window is a toy. We selected servers that expose structured tools capable of being chained together into larger, automated workflows using n8n.
What are the 20 best MCP servers for developers?
In the list below, you will see two key tags. Here is what they mean for your setup:
- Docker (best for self-hosting): These servers include a Dockerfile or a pre-built image. This means you can host them entirely on your own infrastructure, whether that is a VPS, a home lab, or a private cloud. You own the data, you control the logs, and you don't entirely rely on a third party.
- Remote: This tag is the key to automation. It means the server isn't stuck inside your local command line; it can expose a URL. This allows tools like n8n to connect to the server over the network, enabling your workflows to "reach out" and use these tools without them being installed on the same machine. This makes everything easier with n8n Cloud, as you can simply plug in the URL of a remote server without worrying about DNS or complex networking.
Data and memory
Give your agent persistent storage and RAG capabilities.
PostgreSQL MCP by CrystalDBA
Website: crystaldba/postgres-mcp
Deployment: Docker (self-hosted)
Why use it: Instead of asking an LLM to write a query that you have to copy-paste, this server gives the agent direct execution access. It can inspect schemas and run SELECT statements to answer questions about your data instantly.
Note: If you use Neon PostgreSQL, check out the Official Neon Remote MCP as a remote MCP option.
Qdrant MCP Server
Website: qdrant/mcp-server-qdrant
Deployment: Docker (self-hosted)
Why use it: This can be the vector store for your RAG implementation. But since it exposes tools to both store and retrieve information, it can also act as autonomous long-term memory for your agent. Your agent can autonomously store and retrieve "memories" or technical context, preventing it from hallucinating on older data.
MongoDB MCP Server
Website: mongodb-js/mongodb-mcp-server
Deployment: Docker (self-hosted)
Why use it: The official integration for NoSQL data. It translates natural language questions into complex aggregation pipelines, allowing the agent to query unstructured data without you needing to remember specific operator syntax.
Cloud and infrastructure observability
Manage infrastructure, Kubernetes clusters, analyze logs, and alerts without leaving your workflow.
Kubernetes MCP
Website: containers/kubernetes-mcp-server
Deployment: Docker (self-hosted)
Why use it: A wrapper around kubectl that allows safe interactions with your cluster. Your agent can list pods, describe failures, and even restart services in your Dev/Staging environment securely.
AWS MCP
Website: awslabs/mcp
Deployment: Docker (self-hosted), Remote
Why use it: The official reference implementation for AWS. It exposes various AWS SDK capabilities to the agent, allowing for resource inspection and management directly from your chat interface.
Azure MCP Server
Website: Azure.Mcp.Server
Deployment: Docker (self-hosted), Remote
Why use it: The official Microsoft implementation for Azure resource management. It enables the agent to interact with Azure Resource Manager (ARM) to audit and modify resources.
Google Cloud MCP Servers
Website: googleapis/gcloud-mcp
Deployment: Local only
Why use it: Provides agentic access to Google Cloud resources. Useful for listing compute instances, checking storage buckets, or managing IAM roles via natural language.
Cloudflare MCP Servers
Website: Cloudflare Agents
Deployment: Remote
Why use it: Allows the agent to interact with Cloudflare Workers, KV, and DNS settings. Ideal for quickly checking deployment statuses or clearing caches without logging into the dashboard.
Vercel MCP
Website: Vercel Official
Deployment: Remote
Why use it: Tightly coupled with the Vercel ecosystem. It allows the agent to inspect deployment logs and runtime errors. If a build fails, the agent can pull the logs, analyze the error, and propose a configuration fix immediately.
Grafana MCP
Website: grafana/mcp-grafana
Deployment: Docker (self-hosted)
Why use it: It connects your agent to your metrics and dashboards. It enables the agent to query data sources and retrieve visualization snapshots to diagnose performance anomalies.
PagerDuty MCP Server
Website: PagerDuty/pagerduty-mcp-server
Deployment: Docker (self-hosted), Remote
Why use it: The ultimate "On-Call" assistant. Instead of waking up to manually check an alert, an automated agent can fetch incident details, check who is on call, and trigger resolution workflows automatically.
Sentry MCP Server
Website: Sentry Official
Deployment: Remote
Why use it: It connects your agent directly to error tracking. You can ask "What is the most frequent error in production?" and the agent will retrieve the stack trace, read the corresponding file from GitHub, and propose a fix.
Development and testing tools
GitHub MCP Server
Website: github/github-mcp-server
Deployment: Docker (self-hosted), Remote
Why use it: Essential for any coding workflow. It allows the agent to read file contents, search repositories, manage branches, and create pull requests without needing a local git client.
Postman MCP Server
Website: postmanlabs/postman-mcp-server
Deployment: Docker (self-hosted), Remote
Why use it: Allows the agent to run and test your API collections. When you deploy a new endpoint, the agent can verify it actually works by executing the existing Postman test suite.
Context7 MCP Server
Website: upstash/context7
Deployment: Docker (self-hosted), Remote
Why use it: A specialized search tool optimized specifically for technical documentation. Unlike a generic web search, it is tuned to find the most up-to-date framework syntax and coding patterns to feed into the agent's context window.
Playwright MCP
Website: microsoft/playwright-mcp
Deployment: Docker (self-hosted)
Why use it: Enables the agent to run end-to-end tests or browse the web as a user. Useful for verifying UI changes or automating browser-based tasks that lack an API.
Product and business operations
Notion MCP Server
Website: makenotion/notion-mcp-server
Deployment: Docker (self-hosted), Remote
Why use it: Provides read/write access to team wikis. The agent can read a "Product Requirements" page in Notion and generate the corresponding code skeleton, or document technical decisions back into the wiki.
Stripe MCP
Website: Stripe Official
Deployment: Remote
Why use it: Ideal for debugging billing issues. You can query customer subscriptions or check failed transactions in Stripe's test mode without needing to log into the dashboard UI.
Jira MCP
Website: atlassian/atlassian-mcp-server
Deployment: Remote
Why use it: Bridges the gap between project management and code. It allows the agent to find tickets, log work, and transition issues in Jira Cloud, keeping the backlog updated as you code.
How to run, manage, and orchestrate MCPs efficiently (with n8n)?
A successful agentic system requires more than just a collection of disconnected tools; you must orchestrate these MCP servers into a cohesive workflow.
n8n provides a straightforward environment to handle this orchestration, effectively bridging the gap between autonomous execution (always-on, trigger-based background processes) and agentic AI (dynamic reasoning and tool utilisation).
Before we build our first pipeline, we must establish the underlying communication layer.
How AI agents communicate with MCP servers
The Model Context Protocol uses two distinct transport methods, and mixing them up is the most common reason for integration failure.
- Stdio (local context): This is the default mode for desktop applications like Cursor, Claude Desktop, or GitHub Copilot. The way this works is you start the MCP server as a local sub-process (literally running a command like npx run). The problem is that it requires the AI agent and the server to be on the same physical machine. This makes it unusable for cloud automation or background workers.
- Streamable HTTP: For autonomous agents, the ecosystem has shifted to Streamable HTTP. It allows for stateless, reliable connections over any network, whether that is your internal Docker network or the public internet. It decouples the agent from the tool: your n8n instance can be in one container, and your PostgreSQL MCP server can be in another, communicating via standard HTTP requests.
Connect MCP servers to n8n: Docker or remote
Depending on your infrastructure, there are two primary ways to configure this in n8n using the MCP Client Tool node.
- The Docker-Compose setup: If you are self-hosting n8n, the most robust approach is to run your MCP servers as companion containers within the same Docker network. Because Docker provides built-in DNS resolution for containers on the same bridge network, you do not need to expose your MCP server ports to the public internet. n8n can communicate with the server directly using the Docker service name.
- The remote MCP server setup: If you are using n8n Cloud or connecting to a third-party hosted MCP server (like the official GitHub Copilot endpoint), the architecture relies on standard web requests. This approach usually requires some sort of authentication.
Use MCP servers in your Agentic automation workflows
We are going to build an autonomous loop that turns reading newsletters into blog ideas using n8n with the GitHub MCP and Discord for human-in-the-loop. Instead of just sitting in a chat window, this agent is triggered by an incoming email:
- A Gmail listener filters for technical newsletters.
- An LLM-powered agent analyzes the email body to decide if the content matches the portfolio's topics: in this case, technical blog articles or a watchlist for new technologies.
- If something relevant to the portfolio website is found in the newsletter, we enhance the context of the agent by scraping the actual links from the newsletter.
- A Discord "Human-in-the-loop" node requests approval before taking action.
- The GitHub MCP server creates detailed issues and assigns them to Copilot.
First, we configure the Gmail Trigger node. To avoid noise, we set the poll time to Every Hour and strictly filter for the label Newsletters.
In the System Message, we define the persona along with the following workflow instructions:
- It reads the newsletter and tries to find fitting ideas for the website or the blog, generating 10 ideas.
- Asks for feedback via Discord, to pick and choose and improve the ideas.
- Once ideas are approved, it then fetches additional content relevant to the ideas, from the links to articles or documentation found in the newsletter.
- Then plans concrete and detailed GitHub issues including acceptance criteria so that this work can be picked up.
- It can delegate all or some of these issues to the GitHub Copilot to start the actual implementation.
- Using the Notion MCP, it can read or save information to a Notion database, acting like the agent’s memory and audit trail.
Next, using the MCP Client Tool node, let's connect to the GitHub MCP Server using the public endpoint https://api.githubcopilot.com/mcp/. Select HTTP Streamable as the transport and authenticate via Bearer token.
The agent will only have a specific subset of tools available from the MCP server: assign_copilot_to_issue, add_issue_comment, issue_write, and issue_read. So it will never read or edit the actual code on a repository; it will just create issues and delegate them to GitHub Copilot.
We then configure the two types of user interaction:
- Approval via Discord: you will get a message on Discord with two buttons so you can quickly approve or reject an action.
- Ask for feedback: instead of buttons, this allows text input. You can reply with specific instructions or corrections, guiding the agent to revise its plan before it commits any code.
The main functionality that allows for human-in-the-loop is the Send and Wait for Response function in the Discord node.
You can also add other n8n workflows as tools to your agent, which can lead to some creative ways of combining and composing agents. For example, we have a web scraping n8n workflow that we turned into a tool for the agent to use for scraping websites and use the content as context using the Call n8n Workflow Tool node.
At this point, we can start testing this workflow. It grabs the latest newsletter article you received, and you should start seeing interactions on Discord:
In the example above, the AI agent identified a few ideas for my portfolio website, then proceeded to create GitHub Issues on that repository, so I can review, start implementing, or assign them to GitHub Copilot.
Awesome!
No MCP server? No problem.
Let's say you want your AI agent to log research notes into a Google Sheet, but you can't find a "Google Sheets MCP" on GitHub. You don't need to write a new server in TypeScript. You can build it visually in n8n in 5 minutes.
To create an MCP Server in n8n, we use the MCP Server Trigger node. We can then expose regular n8n nodes such as Google Sheets node or Gmail node.
We can configure the Google Sheets tool to allow reading rows from a sheet and the Gmail tool to allow sending of emails.
The To, Subject and Message fields are to be filled automatically by the AI agent using this MCP server.
Now you can use the n8n URL for this MCP server and add it to your favourite Copilot app like Claude or VSCode Copilot. In this case, I gave GitHub Copilot in VSCode a way to ask questions and interact with my newsletter content from the Google Sheet, along with the ability to forward or reply to an email. Now you can have this sort of interaction with your chosen AI agent:
We prompted the copilot to forward me the message from the Google Sheet to my email. You could also ask it to reply directly to your contact enquiries.
Wrap up
We started this guide with a common frustration: the "Smart but Siloed" agent.
Using n8n to orchestrate these MCP servers, we moved past that limitation. We didn't just give the agent "access" to tools; we built an autonomous agentic assistant.
Instead of manually prompting an LLM to check emails, we built a system that wakes up, reads and analyzes an email, then creates and delegates tasks to GitHub Copilot while you are asleep.
What’s next?
Don't let this architecture sit in your bookmarks. Here is your roadmap to move from "reading" to "shipping" this weekend:
- Deploy n8n: Get your local or cloud instance running.
- Connect n8n agents to MCP servers, or explore the wider library of n8n AI integrations to expand your agent's capabilities.
- Construct your agentic loop from scratch, or reverse-engineer production-ready patterns from the n8n AI workflow templates directory.
If you didn't find the tool you need on this list, don't wait for a vendor. You can use n8n to wrap any API (HubSpot, Salesforce, your internal legacy app) and expose it as a custom MCP server.
Happy Automating!
文章链接:https://qimuai.cn/?post=3461
本站文章均为原创,未经授权请勿用于任何商业用途