Using MCP tools with AI Agents in the Go language

Suyash Mohan

What makes AI Agents stand out is their capability to perform actions autonomously. For example, you can write a function that checks the weather and pass it to the AI Agent, which can then decide whether to call it depending on the user’s query. These behaviors are implemented using function calling, also known as tools. The OpenAI SDK allows you to define functions and pass them along with your prompts, and the OpenAI API then responds, letting you know if it needs one of these functions called. Now, instead of developing such functions ourselves, we can utilize Model Context Protocol (MCP) servers.

Model Context Protocol (MCP) was introduced by Anthropic as a standard way of exposing context to an LLM client. MCP has become like the USB of AI applications. You can plug and play different MCP servers to your LLM Chat Client and expose multiple functionalities. For example, you can add an MCP server that can fetch real-time stock prices, search the internet, read your database, etc.

Since there is already a vast ecosystem, we can add a feature to our AI Agent to consume these MCP servers. This will allow us to utilize multiple available MCP servers instead of writing the functionality ourselves.

This is a follow-up to my series on developing an AI Agent using the Go language. Previously, we created an AI Agent that follows a system prompt and responds on Slack accordingly. Now, let’s give it superpowers by adding MCP servers.

Lucky for us, Go already has a great package that allows us to integrate MCP servers into our Agent: https://github.com/mark3labs/mcp-go. mcp-go has features to create an MCP server as well as to consume one as a client. The library supports two transports for consuming MCP servers: SSE (Server-Sent Events) and stdio. SSE is used to connect to MCP servers running over a network, while stdio is used to connect to an MCP server running locally over standard input and output. For simplicity and security, we will focus only on the stdio method. And since we are running MCP servers written by someone else, to increase security even further, we will run them inside Docker.

# Install mcp-go
go get github.com/mark3labs/mcp-go

# These are executed automatically by our Agent, so you can skip running them manually.
# Wikipedia MCP server container
docker run -i --rm mcp/wikipedia-mcp
# DuckDuckGo MCP server container
docker run -i --rm mcp/duckduckgo

The mcp-go package, when using the stdio transport, will execute a command (in our case, Docker) and maintain the stdin and stdout buffers; mcp-go takes care of all the details. We just need to create a new stdio transport object with the command to run. To make our application more configurable, we will create an mcp.json file and read from it which MCP servers to use. This makes it easy to change MCP servers without rebuilding our Go code.

// Sample mcp.json file. We are using duckduckgo and wikipedia mcp servers
{
  "mcpServers": {
    "duckduckgo": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp/duckduckgo"
      ]
    },
    "wikipedia-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp/wikipedia-mcp"
      ]
    }
  }
}
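
To read this file in Go, a small struct that mirrors its shape is enough. Below is a minimal sketch; the type and function names (ServerConfig, MCPConfig, LoadMCPConfig) are illustrative and may differ from what the repo actually uses.

// Sketch: load and parse mcp.json. Type names here are illustrative.
import (
	"encoding/json"
	"os"
)

// ServerConfig holds the command and arguments used to launch one MCP server.
type ServerConfig struct {
	Command string   `json:"command"`
	Args    []string `json:"args"`
}

// MCPConfig mirrors the top-level structure of mcp.json.
type MCPConfig struct {
	MCPServers map[string]ServerConfig `json:"mcpServers"`
}

// LoadMCPConfig reads mcp.json from the given path and unmarshals it.
func LoadMCPConfig(path string) (*MCPConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg MCPConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}

Each entry’s Command and Args can then be passed straight to the stdio client shown below.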

The code below is for quick reference only and might contain errors. For the actual code, refer to my repo, where the working code is posted, and to the official documentation.

// Sample Go code to show how we initialised the MCP Client

import (
	"context"
	"fmt"
	"time"

	"github.com/mark3labs/mcp-go/client"
	"github.com/mark3labs/mcp-go/mcp"
)

// The second argument is the environment for the child process (empty here);
// the remaining arguments are passed to the command, i.e. "docker run -i --rm mcp/duckduckgo".
stdioClient, err := client.NewStdioMCPClient("docker", []string{}, "run", "-i", "--rm", "mcp/duckduckgo")
if err != nil {
	return nil, fmt.Errorf("failed to start transport for mcp server - %w", err)
}

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

initRequest := mcp.InitializeRequest{}
initRequest.Params.ProtocolVersion = mcp.LATEST_PROTOCOL_VERSION
initRequest.Params.ClientInfo = mcp.Implementation{
	Name:    "MCP-Go Client",
	Version: "0.1.0",
}
initRequest.Params.Capabilities = mcp.ClientCapabilities{}

_, err = stdioClient.Initialize(ctx, initRequest)
if err != nil {
	return nil, fmt.Errorf("failed to initialise mcp client - %w", err)
}

Once we have an MCP server running and a connection established with it, we can make requests to it. One of the most common requests is ListTools, which fetches the list of all available tools from the MCP server. Similarly, we can use CallToolRequest to call one of those tools.

// Sample code to use ListTools

toolsRequest := mcp.ListToolsRequest{}
toolsResult, err := stdioClient.ListTools(ctx, toolsRequest)
if err != nil {
	return nil, fmt.Errorf("failed to list tools - %w", err)
}

// Sample code to use CallToolRequest
callToolReq := mcp.CallToolRequest{}
callToolReq.Params.Name = "search"
callToolReq.Params.Arguments = map[string]any{
	"query": "What is the current stock price for NVIDIA?",
}
callToolRes, err := stdioClient.CallTool(ctx, callToolReq)
if err != nil {
	return nil, fmt.Errorf("failed to call tool - %w", err)
}
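
The result of CallTool is a list of content items rather than a plain string. Here is a minimal sketch of pulling the text out of it, assuming the items come back as mcp.TextContent values (it also needs the strings package):

// Collect the text output from the tool call result.
// Non-text content types (e.g. images) are simply skipped here.
var output strings.Builder
for _, content := range callToolRes.Content {
	if textContent, ok := content.(mcp.TextContent); ok {
		output.WriteString(textContent.Text)
	}
}
fmt.Println(output.String())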

We need to map the tool information returned by ListTools to a format that our OpenAI SDK can understand. Then, we pass these tool definitions along with the prompts in our OpenAI API calls. If the LLM decides to call one of the tools, it will return a special tool-call response. We then call the respective MCP tool and send its output back to OpenAI, which can now respond with the updated information.
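
To make that mapping concrete, here is a minimal sketch assuming the github.com/sashabaranov/go-openai SDK; if your agent uses a different OpenAI client, the types and field names on the OpenAI side will differ.

// Sketch: convert MCP tool descriptions into OpenAI tool definitions.
// Assumes the github.com/sashabaranov/go-openai SDK.
import (
	"github.com/mark3labs/mcp-go/mcp"
	openai "github.com/sashabaranov/go-openai"
)

func mcpToolsToOpenAI(tools []mcp.Tool) []openai.Tool {
	out := make([]openai.Tool, 0, len(tools))
	for _, t := range tools {
		out = append(out, openai.Tool{
			Type: openai.ToolTypeFunction,
			Function: &openai.FunctionDefinition{
				Name:        t.Name,
				Description: t.Description,
				// The MCP input schema is already JSON Schema,
				// so it can be passed through as the function parameters.
				Parameters: t.InputSchema,
			},
		})
	}
	return out
}

The toolsResult.Tools slice from the ListTools call above can be fed straight into this helper. With this SDK, each tool call in the LLM’s response carries the function name and a JSON string of arguments; unmarshalling that string into a map[string]any gives exactly what CallToolRequest expects, and the tool’s text output is appended back to the conversation as a tool message before calling the API again.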

# You can find the full source code below
https://github.com/suyashmohan/go-aiagent/tree/mcp

This was a simple attempt to integrate MCP servers into our AI Agents. Hope it was helpful.

Stay tuned for updates. The full source code is available on GitHub!