Chapter 4 — The LLMAgent class¶
Setup Instructions¶
To run this notebook, you'll need our llm-agents-from-scratch framework installed on the running Jupyter kernel. From the project's root directory, you can launch the notebook with the following command:
uv run --with jupyter jupyter lab
Alternatively, if you just want to use the published version of llm-agents-from-scratch without local development, you can install it from PyPI by uncommenting the cell below.
# Uncomment the line below to install `llm-agents-from-scratch` from PyPI
# !pip install llm-agents-from-scratch
Running an Ollama service¶
To execute the code in this notebook, you'll need Ollama installed on your local machine with its LLM hosting service running. To download Ollama, follow the instructions at https://ollama.com/download. After downloading and installing Ollama, start a service by opening a terminal and running the command ollama serve.
Examples¶
Example 1: Instantiating an LLMAgent¶
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch import LLMAgent
llm = OllamaLLM(model="qwen2.5:3b")
llm_agent = LLMAgent(
llm=llm,
)
llm_agent.tools
[]
Example 2: Demo usage of add_tool()¶
from llm_agents_from_scratch.tools import SimpleFunctionTool
def add_one(x: int) -> int:
"""A dummy tool for adding one to the supplied number."""
return x + 1
tool = SimpleFunctionTool(func=add_one)
llm_agent.add_tool(tool)
llm_agent.tools
[<llm_agents_from_scratch.tools.simple_function.SimpleFunctionTool at 0x71089431c590>]
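Conceptually, a tool like SimpleFunctionTool wraps a plain Python callable together with metadata (name, description, parameters) that the LLM can read when deciding what to call. As a rough sketch of the idea — not the library's actual implementation — a minimal function tool might look like:

```python
import inspect

class MiniFunctionTool:
    """Illustrative stand-in for a function-wrapping tool (not the library's code)."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__
        self.description = inspect.getdoc(func) or ""
        # Parameter names the LLM must supply when it calls the tool
        self.parameters = list(inspect.signature(func).parameters)

    def __call__(self, **kwargs):
        return self.func(**kwargs)

def add_one(x: int) -> int:
    """A dummy tool for adding one to the supplied number."""
    return x + 1

tool = MiniFunctionTool(add_one)
print(tool.name, tool.parameters, tool(x=41))  # add_one ['x'] 42
```

The agent's job is then to read this metadata, emit a structured tool call, and feed the return value back into the conversation.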
Example 3: The Hailstone LLM Agent¶
LOGGING_ENABLED = True
import logging
from llm_agents_from_scratch.logger import enable_console_logging
if LOGGING_ENABLED:
enable_console_logging(logging.INFO)
Define the Hailstone tool¶
This is an adapted version of the Hailstone tool from Chapter 2. Since LLMs have been pretrained on a corpus that includes information on the Hailstone sequence, they may rely on their parametric knowledge to perform the task rather than using the provided tool.
One way to encourage tool-calling is to obfuscate the function's details and omit any mention of the Hailstone sequence, ensuring the demonstration shows the agent actually using its tool.
from pydantic import BaseModel
from llm_agents_from_scratch.tools import PydanticFunctionTool
class AlgoParams(BaseModel):
"""Params for next_number."""
x: int
def next_number(params: AlgoParams) -> int:
"""Generate the next number of the sequence."""
if params.x % 2 == 0:
return params.x // 2
return 3 * params.x + 1
# convert our Python function to a BaseTool
tool = PydanticFunctionTool(next_number)
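Before handing the tool to the agent, we can sanity-check the rule it implements directly in plain Python (the helper names below are just for illustration):

```python
def hailstone_step(x: int) -> int:
    # Same rule as the `next_number` tool: halve even numbers, 3x + 1 for odd.
    return x // 2 if x % 2 == 0 else 3 * x + 1

def hailstone_sequence(x: int) -> list[int]:
    # Iterate the rule until reaching 1, collecting the full sequence.
    seq = [x]
    while x != 1:
        x = hailstone_step(x)
        seq.append(x)
    return seq

print(hailstone_sequence(4))  # [4, 2, 1]
```

This is the ground truth the agent should reproduce by repeatedly calling the tool.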
Define our backbone LLM¶
from llm_agents_from_scratch.llms import OllamaLLM
llm = OllamaLLM(model="qwen2.5:3b")
Define the LLMAgent¶
from llm_agents_from_scratch import LLMAgent
llm_agent = LLMAgent(
llm=llm,
tools=[tool],
)
The Hailstone Task¶
from llm_agents_from_scratch.data_structures import Task
instruction_template = """
You are given a tool, `next_number`, that generates the next number in the
sequence given the current number.
Start with the number x={x}.
<rules>
CALL `next_number` on the current number x
STOP AND WAIT for the result.
REPEAT this step-by-step process until the number 1 is reached.
FINAL RESULT: When you receive the number 1, provide the complete sequence you
observed from start to finish (including the starting number x and ending number
1).
</rules>
<warnings>
NEVER fabricate or simulate tool call results
NEVER make multiple tool calls in one response
STOP and WAIT - ALWAYS wait for the actual tool response before deciding next
steps
</warnings>
""".strip()
Running the Task¶
number = 4
sequence = [4, 2, 1]  # expected Hailstone sequence, for reference
task = Task(
instruction=instruction_template.format(x=number),
)
handler = llm_agent.run(task, max_steps=5)
INFO (llm_agents_fs.LLMAgent) : 🚀 Starting task: You are given a tool, `next_number`, that generates the next number in the sequence given the current number. Start with the number ...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: You are given a tool, `next_number`, that generates the next number in the sequence given the current number. Start with the numb...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: next_number
INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: 2
INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: The `next_number` function returned the number 2 when called with x=4. Now I will call the `next_number` function again, but this tim...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: CALL `next_number` on the current number x=2
INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: CALL `next_number` on the current number x=2
INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: next_number
INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: 1
INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: The `next_number` function returned the number 1 when called with x=2. Now, I will provide the complete sequence observed from start ...[TRUNCATED]
INFO (llm_agents_fs.TaskHandler) : No new step required.
INFO (llm_agents_fs.LLMAgent) : 🏁 Task completed: The complete sequence observed from start to finish (including the starting number x and ending number 1) is as follows: 4 -> 2 -> ...[TRUNCATED]
handler.done()
True
# number of sub-steps taken
handler.step_counter
2
The TaskResult¶
Upon successful task execution, the final TaskResult object is set as the result for the TaskHandler (an asyncio.Future).
result = handler.result() if not handler.exception() else str(handler.exception())
print(result)
The complete sequence observed from start to finish (including the starting number x and ending number 1) is as follows: 4 -> 2 -> 1
# Alternative execution style (if you don't want/need the handler)
# result = await llm_agent.run(task, max_steps=5)
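Because the TaskHandler is an asyncio.Future, the done()/result()/exception() access pattern above is just the standard library's Future API. A minimal stdlib sketch of that pattern, with an illustrative payload standing in for the real TaskResult:

```python
import asyncio

async def run_task() -> str:
    loop = asyncio.get_running_loop()
    fut: asyncio.Future = loop.create_future()  # stands in for the TaskHandler

    async def worker() -> None:
        await asyncio.sleep(0)           # pretend to execute the task's steps
        fut.set_result("4 -> 2 -> 1")    # the final TaskResult-like payload

    asyncio.create_task(worker())
    await fut                            # analogous to awaiting the handler
    # Same access pattern used with the handler above
    return fut.result() if not fut.exception() else str(fut.exception())

result = asyncio.run(run_task())
print(result)  # 4 -> 2 -> 1
```

This is why both styles work: you can poll the handler with done() and read result(), or simply await it.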
The Rollout¶
The rollout attribute of the TaskHandler sheds light on the steps that the LLMAgent took to perform its task.
print(handler.rollout)
=== Task Step Start ===
💬 assistant: My current instruction is 'You are given a tool, `next_number`, that generates the next number in the
sequence given the current number.
Start with the number x=4.
<rules>
CALL `next_number` on the current number x
STOP AND WAIT for the result.
REPEAT this step-by-step process until the number 1 is reached.
FINAL RESULT: When you receive the number 1, provide the complete sequence you
observed from start to finish (including the starting number x and ending number
1).
</rules>
<warnings>
NEVER fabricate or simulate tool call results
NEVER make multiple tool calls in one response
STOP and WAIT - ALWAYS wait for the actual tool response before deciding next
steps
</warnings>'
💬 assistant: I need to make the following tool call(s):
{
"id_": "211bc0e5-b633-4582-a1fd-1d61d4701f4e",
"tool_name": "next_number",
"arguments": {
"x": 4
}
}.
🔧 tool: {
"tool_call_id": "211bc0e5-b633-4582-a1fd-1d61d4701f4e",
"content": "2",
"error": false
}
💬 assistant: The `next_number` function returned the number 2 when called with x=4.
Now I will call the `next_number` function again, but this time with the new current number being 2.
I'll proceed to make my next tool call.
=== Task Step End ===
=== Task Step Start ===
💬 assistant: My current instruction is 'CALL `next_number` on the current number x=2'
💬 assistant: I need to make the following tool call(s):
{
"id_": "7d529c53-2da2-49c7-a538-abfa8217cd62",
"tool_name": "next_number",
"arguments": {
"x": 2
}
}.
🔧 tool: {
"tool_call_id": "7d529c53-2da2-49c7-a538-abfa8217cd62",
"content": "1",
"error": false
}
💬 assistant: The `next_number` function returned the number 1 when called with x=2.
Now, I will provide the complete sequence observed from start to finish: Starting at x = 4 and ending with x = 1.
Here is the sequence:
```
4 -> 2 -> 1
```
Since we received the final number 1 as requested by the instruction, no further calls are needed. The task can now be considered complete.
=== Task Step End ===