AWS Open-Sources an MCP Server for Bedrock AgentCore to Streamline AI Agent Development

By Asif Razzaq

AWS released an open-source Model Context Protocol (MCP) server for Amazon Bedrock AgentCore, providing a direct path from natural-language prompts in agentic IDEs to deployable agents on AgentCore Runtime. The package ships with automated transformations, environment provisioning, and Gateway/tooling hooks designed to compress typical multi-step integration work into conversational commands.

The "AgentCore MCP server" exposes task-specific tools to a client (e.g., Kiro, Claude Code, Cursor, Amazon Q Developer CLI, or the VS Code Q plugin) and guides the assistant to: (1) minimally refactor an existing agent to the AgentCore Runtime model; (2) provision and configure the AWS environment (credentials, roles/permissions, ECR, config files); (3) wire up AgentCore Gateway for tool calls; and (4) invoke and test the deployed agent -- all from the IDE's chat surface.

Practically, the server teaches your coding assistant to convert entry points to AgentCore handlers, add the required imports, generate the supporting files, and rewrite direct agent calls into payload-based handlers compatible with Runtime. It can then call the AgentCore CLI to deploy and exercise the agent, including end-to-end calls through Gateway tools.
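To make the refactor concrete, here is a minimal sketch of the payload-based handler shape the transform targets. It assumes the bedrock-agentcore Python SDK's BedrockAgentCoreApp entrypoint pattern and a Strands agent; the actual code the assistant generates for a given project will differ.

```python
# Sketch only: the payload-based handler shape an AgentCore Runtime agent uses.
# Assumes the bedrock-agentcore Python SDK and the Strands Agents SDK; the
# payload key "prompt" and the response shape are illustrative.
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()
agent = Agent()  # existing agent logic is kept; only the entry point changes


@app.entrypoint
def invoke(payload: dict) -> dict:
    # Before the refactor, callers invoked the agent directly, e.g. agent("hi").
    # After it, AgentCore Runtime delivers a JSON payload and expects JSON back.
    prompt = payload.get("prompt", "")
    result = agent(prompt)
    return {"result": str(result)}


if __name__ == "__main__":
    app.run()  # serve the handler for AgentCore Runtime
```

The point of the transform is that the agent's internal logic stays put; only the entry point is wrapped so Runtime can invoke it with a JSON payload.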

AWS provides a one-click install flow from the GitHub repository, using a lightweight launcher and a standard server entry that most MCP-capable clients consume. The AWS team lists the expected configuration-file locations for Kiro, Cursor, Amazon Q CLI, and Claude Code.
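For orientation, a client entry typically looks like the sketch below. The launcher (uvx) and the package name, patterned on other awslabs servers, are assumptions here; the repository's install instructions give the authoritative entry and the per-client config-file paths.

```json
{
  "mcpServers": {
    "awslabs.amazon-bedrock-agentcore-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.amazon-bedrock-agentcore-mcp-server@latest"],
      "env": { "FASTMCP_LOG_LEVEL": "ERROR" }
    }
  }
}
```

Additional documentation servers, such as the AWS Documentation MCP Server discussed below, are layered in as sibling entries under mcpServers.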

The repository sits in the awslabs "mcp" mono-repo (license Apache-2.0). While the AgentCore server directory hosts the implementation, the root repo also links to broader AWS MCP resources and documentation.

AWS recommends a layered approach to give the IDE's assistant progressively richer context: start with the agentic client, then add the AWS Documentation MCP Server, layer in framework documentation (e.g., Strands Agents, LangGraph), include the AgentCore and agent-framework SDK docs, and finally steer recurrent workflows via per-IDE "steering files." This arrangement reduces retrieval misses and helps the assistant plan the end-to-end transform/deploy/test loop without manual context switching.

Most "agent frameworks" still require developers to learn cloud-specific runtimes, credentials, role policies, registries, and deployment CLIs before any useful iteration. AWS's MCP server shifts that work into the IDE assistant and narrows the "prompt-to-production" gap. Since it's just another MCP server, it composes with existing doc servers (AWS service docs, Strands, LangGraph) and can ride improvements in MCP-aware clients, making it a low-friction entry point for teams standardizing on Bedrock AgentCore.

I like that AWS shipped a real MCP endpoint for AgentCore that my IDE can call directly. The launcher-based config makes client hookup trivial (Cursor, Claude Code, Kiro, Amazon Q CLI), and the server's tooling maps cleanly onto the AgentCore Runtime/Gateway/Memory stack while preserving existing Strands/LangGraph code paths. Practically, this collapses the prompt→refactor→deploy→test loop into a reproducible, scriptable workflow rather than bespoke glue code.
