Chapter 5 — MCP Tools¶
Setup Instructions¶
To run this notebook, you'll need the llm-agents-from-scratch framework installed on the running Jupyter kernel. To do this, launch the notebook with the following command from within the project's root directory:
uv run --with jupyter jupyter lab
Alternatively, if you just want to use the published version of llm-agents-from-scratch without local development, you can install it from PyPI by uncommenting the cell below.
# Uncomment the line below to install `llm-agents-from-scratch` from PyPI
# !pip install llm-agents-from-scratch
Running an Ollama service¶
To execute the code provided in this notebook, you’ll need to have Ollama installed on your local machine and have its LLM hosting service running. To download Ollama, follow the instructions found on this page: https://ollama.com/download. After downloading and installing Ollama, you can start a service by opening a terminal and running the command ollama serve.
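Later examples assume the models they reference are already available locally. A sketch of the setup commands, assuming a standard Ollama install (model names taken from the examples below):

```shell
# Start the Ollama service (keeps running in this terminal)
ollama serve

# In another terminal, pull the models used later in this notebook
ollama pull qwen2.5:3b
ollama pull qwen3:4b
```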
The Hailstone MCP server¶
The examples in this notebook demonstrate the framework's MCP integration using a toy MCP server that exposes the familiar Hailstone tool.
Important: Run this notebook from within the project's root directory.
The code for the MCP server is located at: https://github.com/nerdai/llm-agents-from-scratch/tree/main/extra/mcp-hailstone
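As a refresher, the Hailstone (Collatz) step rule halves even numbers and maps odd numbers to 3x + 1. A minimal sketch of such a function, for illustration only; the actual server implementation lives in the repository linked above:

```python
def hailstone_step(x: int) -> int:
    """One step of the Hailstone rule: halve even x, map odd x to 3x + 1."""
    return x // 2 if x % 2 == 0 else 3 * x + 1

print(hailstone_step(5))   # odd: 3 * 5 + 1 = 16
print(hailstone_step(16))  # even: 16 // 2 = 8
```

This matches the tool-call results shown later in the notebook, where `hailstone_step_fn(5)` returns `16`.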
Examples¶
Example 1: Creating an MCPToolProvider¶
from llm_agents_from_scratch.tools.mcp import MCPToolProvider
from mcp import StdioServerParameters
from pathlib import Path
server_path = Path.cwd().parent / "extra/mcp-hailstone"
server_params = StdioServerParameters(
command="uv",
args=["run", "--with", "mcp", "mcp", "run", "main.py"],
cwd=server_path,
)
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
provider
<llm_agents_from_scratch.tools.mcp.provider.MCPToolProvider at 0x7b13dabbc590>
Example 2: Establishing a session (connection) to Hailstone MCP server¶
import time
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
start = time.perf_counter()
await provider.session()
print(f"First call (establish): {time.perf_counter() - start:.6f}s")
start = time.perf_counter()
await provider.session()
print(f"Second call (cached): {time.perf_counter() - start:.6f}s")
First call (establish): 0.289514s
Second call (cached): 0.000057s
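The timings above suggest a lazy, cached-session pattern: the first `session()` call pays the connection cost, and later calls return the already-established session. A minimal sketch of that pattern (the class and sleep are illustrative stand-ins, not the framework's actual internals):

```python
import asyncio
import time


class CachedSession:
    """Illustrative lazy-session cache: connect once, reuse thereafter."""

    def __init__(self) -> None:
        self._session = None

    async def session(self):
        if self._session is None:
            await asyncio.sleep(0.05)  # stand-in for the real connection handshake
            self._session = object()
        return self._session


async def main():
    provider = CachedSession()

    start = time.perf_counter()
    first = await provider.session()
    first_elapsed = time.perf_counter() - start

    start = time.perf_counter()
    second = await provider.session()
    second_elapsed = time.perf_counter() - start

    assert first is second  # same cached object on the second call
    assert second_elapsed < first_elapsed
    print("cached call is faster")


asyncio.run(main())
```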
Example 3: Discovering tools and creating MCPTool representations¶
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
tools = await provider.get_tools()
print(f"Number of tools discovered: {len(tools)}")
Number of tools discovered: 1
Example 4: Tearing down the session¶
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
await provider.session()
print(f"Session is None: {provider._session is None}")
print(f"Session is ready: {provider._session_ready.is_set()}")
print("-------------")
await provider.close()
print(f"Session is None: {provider._session is None}")
print(f"Session is ready: {provider._session_ready.is_set()}")
Session is None: False
Session is ready: True
-------------
Session is None: True
Session is ready: False
Example 5: Printing the MCP Hailstone tool's attributes¶
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
tools = await provider.get_tools()
print(f"Tool name: {tools[0].name}")
print(f"Tool description: {tools[0].description}")
print(f"Tool parameters: {tools[0].parameters_json_schema}")
print(f"Provider reference: {tools[0].provider}")
print(f"Additional annotations: {tools[0].additional_annotations}")
Tool name: mcp__hailstone__hailstone_step_fn
Tool description: Performs a single step of the Hailstone sequence.
Tool parameters: {'properties': {'x': {'title': 'X', 'type': 'integer'}}, 'required': ['x'], 'title': 'hailstone_step_fnArguments', 'type': 'object'}
Provider reference: <llm_agents_from_scratch.tools.mcp.provider.MCPToolProvider object at 0x7b13dacde190>
Additional annotations: None
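Note the tool's fully qualified name, `mcp__hailstone__hailstone_step_fn`. It appears to combine a fixed `mcp` prefix, the provider's name, and the server-side tool name, joined by double underscores. A hypothetical reconstruction of that convention (not the framework's actual code):

```python
def qualified_tool_name(provider_name: str, tool_name: str) -> str:
    """Illustrative guess at the MCPTool naming scheme seen above."""
    return f"mcp__{provider_name}__{tool_name}"


print(qualified_tool_name("hailstone", "hailstone_step_fn"))
```

This scheme would keep tool names unique when multiple MCP providers expose tools with the same name.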
Example 6: MCP Hailstone tool execution¶
from llm_agents_from_scratch.data_structures import ToolCall
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
tools = await provider.get_tools()
tool_call = ToolCall(
tool_name=tools[0].name,
arguments={"x": 5},
)
result = await tools[0](tool_call)
result
ToolCallResult(tool_call_id='ad639f7b-8116-4bcf-9dbf-9941d01bdbb3', content=[{'type': 'text', 'text': '16', 'annotations': None}], error=False)
Example 7: Manual discovery of MCP tools to construct an LLM agent¶
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch import LLMAgent
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
tools = await provider.get_tools()
llm = OllamaLLM(model="qwen2.5:3b")
llm_agent = LLMAgent(
llm=llm,
tools=tools,
)
llm_agent.tools
[<llm_agents_from_scratch.tools.mcp.tool.MCPTool at 0x7b13d892eea0>]
Example 8: Different ways to construct an LLMAgentBuilder with the same attributes¶
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch.agent.builder import LLMAgentBuilder
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
llm = OllamaLLM(model="qwen2.5:3b")
# directly passing attributes at construction
builder = LLMAgentBuilder(
llm=llm,
mcp_providers=[provider],
)
# fluent style with builder methods
builder = LLMAgentBuilder()\
.with_llm(llm)\
.with_mcp_provider(provider)
# another fluent style
builder = LLMAgentBuilder()\
.with_mcp_providers([provider])\
.with_llm(llm)
Example 9: Using an LLMAgentBuilder to create the same LLMAgent from example 7¶
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch.agent.builder import LLMAgentBuilder
provider = MCPToolProvider(
name="hailstone",
stdio_params=server_params,
)
llm = OllamaLLM(model="qwen3:4b")
llm_agent = await LLMAgentBuilder()\
.with_llm(llm)\
.with_mcp_provider(provider)\
.build()
Example 10: Performing a hailstone step with our LLM agent¶
import logging
from llm_agents_from_scratch.logger import enable_console_logging
enable_console_logging(logging.INFO)
from llm_agents_from_scratch.data_structures.agent import Task
instruction = (
"What is hailstone_step_fn(5)? "
"You must call the tool to get the answer."
)
task = Task(instruction=instruction)
result = await llm_agent.run(task)
INFO (llm_agents_fs.LLMAgent) : 🚀 Starting task: What is hailstone_step_fn(5)? You must call the tool to get the answer.
INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: What is hailstone_step_fn(5)? You must call the tool to get the answer.
INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: mcp__hailstone__hailstone_step_fn
INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: [{'type': 'text', 'text': '16', 'annotations': None}]
INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: Okay, let's see. The user asked for hailstone_step_fn(5). I called the tool with x=5 and got the response 16.
So the Hailstone sequenc...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: The assistant has already completed the tool call and received the result. The final answer is 16.
INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: The assistant has already completed the tool call and received the result. The final answer is 16.
INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: Okay, let me check what's going on here. The user asked for hailstone_step_fn(5). The assistant already called the tool with x=5 and go...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : No new step required.
INFO (llm_agents_fs.LLMAgent) : 🏁 Task completed: The answer to hailstone_step_fn(5) is **16**. This follows the Hailstone sequence rule where odd numbers $ x $ are transformed to $ ...[TRUNCATED]
result
TaskResult(task_id='cc267218-2e01-430f-b740-7bb1f9b50970', content='The answer to hailstone_step_fn(5) is **16**. This follows the Hailstone sequence rule where odd numbers $ x $ are transformed to $ 3x + 1 $. For $ x = 5 $: $ 3 \\times 5 + 1 = 16 $. No further tool calls are needed. Final answer: **16**.')