Google A2A - a First Look at Another Agent-agent Protocol

by Bruce Li, April 10th, 2025

Too Long; Didn't Read

A first look at Google A2A, another agent-to-agent protocol, and how it compares with Anthropic’s MCP. Well, they are very similar.

Today Google released its open-source agent-to-agent protocol, imaginatively named A2A, or Agent to Agent. Since we already see a lot of momentum behind Anthropic’s MCP (Model Context Protocol), Google claimed that A2A is complementary to MCP. They even used a heart emoji to drive home the point.


I’m not so sure, so I decided to take a deeper look and figure out where A2A will fit in the agentic universe. We will cover how A2A works in practice, and how it compares with MCP.

Test drive A2A

Using A2A is surprisingly similar to using MCP. You run a few A2A agents (servers), and the A2A client can then connect to all of them. The good news is that you typically do not need to run the A2A agents alongside the A2A client; they can be hosted anywhere.

Running A2A agents (servers)

I spun up all three example agents locally:


  1. Google ADK agent that can submit expense reports for you
  2. CrewAI agent that can generate an image
  3. LangGraph agent that can find out the latest foreign exchange rate


The way that an A2A server lets the world know its capabilities is through an “Agent Card” in JSON format. As an example, the agent card for the Google ADK agent looks like this:


{
	"name": "Reimbursement Agent",
	"description": "This agent handles the reimbursement process for the employees given the amount and purpose of the reimbursement.",
	"url": "http://localhost:10002/",
	"version": "1.0.0",
	"capabilities": {
		"streaming": true,
		"pushNotifications": false,
		"stateTransitionHistory": false
	},
	"defaultInputModes": [
		"text",
		"text/plain"
	],
	"defaultOutputModes": [
		"text",
		"text/plain"
	],
	"skills": [
		{
			"id": "process_reimbursement",
			"name": "Process Reimbursement Tool",
			"description": "Helps with the reimbursement process for users given the amount and purpose of the reimbursement.",
			"tags": [
				"reimbursement"
			],
			"examples": [
				"Can you reimburse me $20 for my lunch with the clients?"
			]
		}
	]
}
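
For illustration, here is a minimal sketch of how an agent could publish such a card at the well-known discovery path. This is not the official A2A server implementation, just Python’s standard library HTTP server; a real agent would also expose the task endpoints defined by the protocol, and the card is assumed to be saved locally as agent_card.json:

# Minimal sketch: serve an A2A Agent Card at the well-known discovery path.
# Not the official A2A server implementation; a real agent would also expose
# the task endpoints defined by the protocol.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

with open("agent_card.json") as f:   # the Agent Card JSON shown above
    AGENT_CARD = json.load(f)

class AgentCardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/agent.json":
            body = json.dumps(AGENT_CARD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 10002), AgentCardHandler).serve_forever()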


Launch A2A Client demo app

Let’s continue with the client. The instructions for getting the demo web app running are here: https://github.com/google/A2A/tree/main/demo


Once the web app is running, you can access it from your browser. The client looks a bit like the Gemini AI Studio, with Google’s signature Material Design.


URL: localhost:12000


First things first, we need to add all the agents to the client by specifying their base URLs. Since I ran all the agents locally in my case, their base URLs were:


  • Google ADK
    • localhost:10002
  • crewAI
    • localhost:10001
  • LangGraph
    • localhost:10000


Side note: under the hood, the client fetches each agent’s card from a well-known URL that looks like this:

http://localhost:10002/.well-known/agent.json
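
In other words, a client can discover an agent’s capabilities with a plain HTTP GET against that well-known path. Here is a minimal sketch in Python, using only the standard library rather than any official A2A client code; the field names follow the Agent Card shown earlier:

# Sketch: discover an A2A agent by fetching its Agent Card.
import json
import urllib.request

def fetch_agent_card(base_url: str) -> dict:
    """Download the Agent Card advertised by an A2A agent."""
    url = base_url.rstrip("/") + "/.well-known/agent.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

card = fetch_agent_card("http://localhost:10002")
print(card["name"])                               # "Reimbursement Agent"
print([skill["id"] for skill in card["skills"]])  # ["process_reimbursement"]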


Now you can see all three agents that are connected:

A2A agents


You can see the chat history here:

A2A chats


Here is the full event list:

A2A event list


And the full task list:

A2A task list


The Settings page is quite basic:

A2A settings

Test Google ADK agent for expense claim

Google ADK Agent - expense claim

Test LangGraph for forex rate

LangGraph Agent - forex rate


Test CrewAI agent for image generation

CrewAI Agent - image generation

A combo test for multiple agents

I wanted to see if the A2A client could use multiple agents to achieve a single goal, so I tested whether it could combine the expense claim agent with the forex rate agent. And it did work.


My task was to “claim for an expense for a beer in Germany while on a business trip, 5 euros, April 4 2025”. The conversation went through a few rounds of back and forth, and eventually got the right amount of US dollars in the expense claim form.
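
Conceptually, the client is chaining two remote agents: it asks the forex agent for the EUR-to-USD rate, then hands the result to the reimbursement agent. The sketch below is hypothetical, not the demo’s actual code; the send_task helper and the "tasks/send" JSON-RPC method name are assumptions based on my reading of the early protocol documentation:

# Hypothetical sketch of the multi-agent flow; the real demo client has its
# own orchestration logic. The "tasks/send" method name is an assumption.
import json
import urllib.request
import uuid

def send_task(base_url: str, text: str) -> dict:
    """Send a one-shot text task to an A2A agent and return the raw reply."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # task id
            "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
        },
    }
    req = urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1) Ask the LangGraph agent for the exchange rate on the trip date.
rate_reply = send_task("http://localhost:10000",
                       "What was the EUR to USD exchange rate on April 4 2025?")

# 2) Ask the Google ADK agent to file the claim in US dollars.
claim_reply = send_task("http://localhost:10002",
                        "File an expense claim for a 5 euro beer on a business trip "
                        "to Germany on April 4 2025, converted to USD.")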



Initial Observations of A2A

I like that A2A is a pure client-server model in which both sides can be run and hosted remotely. The client is not burdened with specifying and launching the agents/servers.


The agent configuration is fairly simple: you just specify the base URL, and the “Agent Card” takes care of the context exchange. You can also add and remove agents after the client has already been launched.


In its current demo form, it is a bit difficult to understand how agents communicate with each other to accomplish complex tasks. The client calls each agent separately for different tasks, which feels very much like multiple tool calling.

Compare A2A with MCP

Now that I have tried out A2A, it is time to compare it with MCP, which I wrote about in an earlier article.


While both A2A and MCP aim to improve AI agent system development, in theory they address distinct needs. A2A operates at the agent-to-agent level, focusing on interaction between independent entities, whereas MCP operates at the LLM level, focusing on enriching the context and capabilities of individual language models.


And to give a glimpse of their main similarities and differences, according to their protocol documentation:

| Feature | A2A | MCP |
| --- | --- | --- |
| Primary Use Case | Agent-to-agent communication and collaboration | Providing context and tools (external API/SDK) to LLMs |
| Core Architecture | Client-server (agent-to-agent) | Client-host-server (application-LLM-external resource) |
| Standard Interface | JSON specification, Agent Card, Tasks, Messages, Artifacts | JSON-RPC 2.0, Resources, Tools, Memory, Prompts |
| Key Features | Multimodal, dynamic, secure collaboration, task management, capability discovery | Modularity, security boundaries, reusability of connectors, SDKs, tool discovery |
| Communication Protocol | HTTP, JSON-RPC, SSE | JSON-RPC 2.0 over stdio, HTTP with SSE (or streamable HTTP) |
| Performance Focus | Asynchronous communication for load handling | Efficient context management, parallel processing, caching for high throughput |
| Adoption & Community | Good initial industry support, nascent ecosystem | Substantial adoption across the industry, fast-growing community |
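
To make the overlap concrete, here is roughly what a single "do something for me" request looks like on each side: an A2A task versus an MCP tool call. Both are JSON-RPC envelopes; the A2A method and field names are taken from the early protocol documentation and may change, so treat this as a sketch rather than a normative example:

# Sketch: comparable requests in A2A and MCP, shown as Python dicts.
# The A2A method and field names follow the early protocol docs and may change.

# A2A: the client sends a Task containing a user Message to a remote agent.
a2a_task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-123",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Reimburse me $20 for lunch with the clients"}],
        },
    },
}

# MCP: the host asks a server to invoke one of its declared tools.
mcp_tool_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "process_reimbursement",
        "arguments": {"amount": 20, "purpose": "lunch with clients"},
    },
}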

Conclusions

Even though Google made it sound like A2A is a complementary protocol to MCP, my first test shows that they overlap heavily in purpose and features. They both address the needs of AI application developers who want to use multiple agents and tools to achieve complex goals. Right now, they both lack a good mechanism to register and discover other agents and tools without manual configuration.


MCP had an early start and already garnered tremendous support from both the developer community and large enterprises. A2A is very young, but already boasts strong initial support from many Google Cloud enterprise customers.


I believe this is great news for developers, since they will have more choices among open, standard agent-to-agent protocols. Only time will tell which will reign supreme, or whether they will eventually merge into a single standard.