With and Without Skills — Word Frequency¶
This notebook is a continuation of additional_resource_python.ipynb. It reuses the same word-frequency skill, so reading that notebook first is recommended.
This notebook illustrates the core value proposition of skills: reusability. Running a word-frequency task without a skill requires the caller to know the implementation details: the script path, the fact that the text must be piped in via stdin, and how to invoke the tool. A skill packages all of that knowledge once, so every future call is simply run_with_skill("word-frequency", prompt=text), regardless of the input.
Note: An LLM backbone alone, without any tools, cannot perform exact word counting reliably. This notebook uses TOOLS_FOR_SKILL_RESOURCES in both runs; the skill's role is not to add capability but to encode how to use the available tools correctly.
Setup Instructions¶
To ensure you have the required dependencies to run this notebook, you'll need to have our llm-agents-from-scratch framework installed on the running Jupyter kernel. To do this, you can launch this notebook with the following command while within the project's root directory:
uv run --with jupyter jupyter lab
Alternatively, if you just want to use the published version of llm-agents-from-scratch without local development, you can install it from PyPI by uncommenting the cell below.
# Uncomment the line below to install `llm-agents-from-scratch` from PyPI
# !pip install llm-agents-from-scratch
Running an Ollama service¶
To execute the code provided in this notebook, you'll need to have Ollama installed on your local machine and have its LLM hosting service running. To download Ollama, follow the instructions found on this page: https://ollama.com/download. After downloading and installing Ollama, you can start a service by opening a terminal and running the command ollama serve.
Setup¶
import logging
from llm_agents_from_scratch import TOOLS_FOR_SKILL_RESOURCES, LLMAgent
from llm_agents_from_scratch.data_structures import Task
from llm_agents_from_scratch.llms import OllamaLLM
from llm_agents_from_scratch.logger import enable_console_logging
enable_console_logging(logging.INFO)
llm = OllamaLLM(model="qwen3:14b", think=False)
PASSAGE = """\
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
"""
Without Skill¶
The agent has TOOLS_FOR_SKILL_RESOURCES and is handed the script path directly in the prompt. It correctly calls from_scratch__python_interpreter with the passage as stdin, but only because the prompt spelled out exactly what to do. Every new use of this workflow requires the caller to supply the same prompt template as well as the Python script itself.
from pathlib import Path
script_path = Path(
".agents/skills/word-frequency/scripts/word_freq.py",
).resolve()
task = Task(
instruction=(
"What are the top-10 most frequent words in the following passage?"
f" Use the script at {script_path} to compute the answer.\n\n{PASSAGE}"
),
)
agent_without = LLMAgent(llm=llm, tools=TOOLS_FOR_SKILL_RESOURCES)
result = await agent_without.run(task, skills_scopes=[], max_steps=5)
print(result)
INFO (llm_agents_fs.LLMAgent) : 🚀 Starting task: What are the top-10 most frequent words in the following passage? Use the script at /home/nerdai/Projects/llm-agents-from-scratch/mor...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: What are the top-10 most frequent words in the following passage? Use the script at /home/nerdai/Projects/llm-agents-from-scratch/...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: I need to call the from_scratch__python_interpreter tool with the path to the word_freq.py script and the passage as the stdin input. INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: Call the from_scratch__python_interpreter tool with the path to the word_freq.py script and the passage as the stdin input. INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: Call the from_scratch__python_interpreter tool with the path to the word_freq.py script and the passage as the stdin input. INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: from_scratch__python_interpreter INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: **Top-10 word frequencies** | Rank | Word | Count | |------|------|-------| | 1 | is | 10 | | 2 | better | 8 | | 3 | than | 8...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: The top-10 most frequent words in the passage are as follows: 1. **is** - 10 occurrences 2. **better** - 8 occurrences 3. **than**...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : No new step required. INFO (llm_agents_fs.LLMAgent) : 🏁 Task completed: The top-10 most frequent words in the passage are as follows: 1. **is** - 10 occurrences 2. **better** - 8 occurrences 3. **tha...[TRUNCATED] The top-10 most frequent words in the passage are as follows: 1. **is** - 10 occurrences 2. **better** - 8 occurrences 3. **than** - 8 occurrences 4. **to** - 5 occurrences 5. **the** - 5 occurrences 6. **although** - 3 occurrences 7. **never** - 3 occurrences 8. **be** - 3 occurrences 9. **one** - 3 occurrences 10. 
**idea** - 3 occurrences Let me know if you need further analysis!
With Skill¶
With the word-frequency skill, the caller doesn't need to know anything about the script or stdin. The skill encodes that knowledge: run_with_skill() is the entire interface.
agent_with_skill = LLMAgent(llm=llm, tools=TOOLS_FOR_SKILL_RESOURCES)
result = await agent_with_skill.run_with_skill(
"word-frequency",
prompt=PASSAGE,
max_steps=5,
)
print(result)
INFO (llm_agents_fs.LLMAgent) : 🚀 Starting task: This is a user-explicit skill activation. Call the from_scratch__use_skill tool with name='word-frequency'. Use exactly this name — i...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: This is a user-explicit skill activation. Call the from_scratch__use_skill tool with name='word-frequency'. Use exactly this name ...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: I need to call the from_scratch__use_skill tool with name='word-frequency' to compute the top-10 most frequent words in the provided te...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: Call the from_scratch__use_skill tool with name='word-frequency' to compute the top-10 most frequent words in the provided text passage. INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: Call the from_scratch__use_skill tool with name='word-frequency' to compute the top-10 most frequent words in the provided text pa...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: from_scratch__use_skill INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: <skill_content name="word-frequency"> # Word Frequency This skill counts word frequencies in a text passage provided by the u...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: I need to execute the Python script located at `/home/nerdai/Projects/llm-agents-from-scratch/more-examples/ch06/.agents/skills/word-fr...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: Execute the Python script located at `/home/nerdai/Projects/llm-agents-from-scratch/more-examples/ch06/.agents/skills/word-frequency/scrip...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: Execute the Python script located at `/home/nerdai/Projects/llm-agents-from-scratch/more-examples/ch06/.agents/skills/word-frequen...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: from_scratch__python_interpreter INFO (llm_agents_fs.TaskHandler) : ✅ 
Successful Tool Call: **Top-10 word frequencies** | Rank | Word | Count | |------|------|-------| | 1 | is | 10 | | 2 | better | 8 | | 3 | than | 8...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | |--...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : No new step required. INFO (llm_agents_fs.LLMAgent) : 🏁 Task completed: The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | ...[TRUNCATED] The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | |------|--------|-------| | 1 | is | 10 | | 2 | better | 8 | | 3 | than | 8 | | 4 | to | 5 | | 5 | the | 5 | | 6 | although | 3 | | 7 | never | 3 | | 8 | be | 3 | | 9 | one | 3 | | 10 | idea | 3 |
Running on a Different Passage¶
The skill makes reuse trivial. Here is the same one-liner on a completely different text. No additional prompt engineering, no knowledge of script paths or stdin required.
PASSAGE_2 = """\
It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness,
it was the epoch of belief, it was the epoch of incredulity,
it was the season of Light, it was the season of Darkness,
it was the spring of hope, it was the winter of despair.
"""
result = await agent_with_skill.run_with_skill(
"word-frequency",
prompt=PASSAGE_2,
)
print(result)
INFO (llm_agents_fs.LLMAgent) : 🚀 Starting task: This is a user-explicit skill activation. Call the from_scratch__use_skill tool with name='word-frequency'. Use exactly this name — i...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: This is a user-explicit skill activation. Call the from_scratch__use_skill tool with name='word-frequency'. Use exactly this name ...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: from_scratch__use_skill INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: <skill_content name="word-frequency"> # Word Frequency This skill counts word frequencies in a text passage provided by the u...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: I need to compute the top-10 most frequent words in the provided text passage. Let me execute the Python script to perform the word fre...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🧠 New Step: Execute the Python script located at '/home/nerdai/Projects/llm-agents-from-scratch/more-examples/ch06/.agents/skills/word-frequency/scrip...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ⚙️ Processing Step: Execute the Python script located at '/home/nerdai/Projects/llm-agents-from-scratch/more-examples/ch06/.agents/skills/word-frequen...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : 🛠️ Executing Tool Call: from_scratch__python_interpreter INFO (llm_agents_fs.TaskHandler) : ✅ Successful Tool Call: **Top-10 word frequencies** | Rank | Word | Count | |------|------|-------| | 1 | it | 10 | | 2 | was | 10 | | 3 | the | 10 |...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : ✅ Step Result: The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | |--...[TRUNCATED] INFO (llm_agents_fs.TaskHandler) : No new step required. 
INFO (llm_agents_fs.LLMAgent) : 🏁 Task completed: The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | ...[TRUNCATED] The top-10 most frequent words in the provided text passage are as follows: **Top-10 word frequencies** | Rank | Word | Count | |------|--------|-------| | 1 | it | 10 | | 2 | was | 10 | | 3 | the | 10 | | 4 | of | 10 | | 5 | times | 2 | | 6 | age | 2 | | 7 | epoch | 2 | | 8 | season | 2 | | 9 | best | 1 | | 10 | worst | 1 | Let me know if you need further assistance!
Key Takeaway¶
A skill is a reusable, self-contained bundle, not just a capability. Without the word-frequency skill, the caller is responsible for managing two separate artifacts on their own:
- the prompt template (script path, stdin convention, instructions for the agent)
- the Python script (word_freq.py) that actually does the computation
Both must be stored, potentially versioned, and resupplied on every invocation. If the script moves or the invocation details change, every prompt that references it must be updated as well.
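The script itself is not shown in this notebook. A minimal sketch of what a stdin-based word_freq.py might look like (assuming simple lowercase tokenization; the actual script in the skill may tokenize differently):

```python
import re
import sys
from collections import Counter


def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Lowercase the text, extract word tokens, and return the n most common."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)


if __name__ == "__main__":
    # Per the skill's convention, the agent pipes the passage in via stdin.
    for rank, (word, count) in enumerate(top_words(sys.stdin.read()), start=1):
        print(f"{rank}. {word}: {count}")
```

Because the script reads stdin rather than taking a file argument, it works on any passage without modification, which is exactly what makes the skill reusable.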
A skill bundles both artifacts together in a single directory (SKILL.md + scripts/). That bundle travels with the project, can be committed to version control, and is discovered automatically by any LLMAgent running in that directory. The same skill can be shared across team members, reused by different agent clients, and invoked identically by all of them with one line:
agent.run_with_skill("word-frequency", prompt=text)
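Concretely, the bundle for this example is laid out as a directory (the layout below is inferred from the script path used earlier; SKILL.md contents are not shown in this notebook):

```text
.agents/skills/word-frequency/
├── SKILL.md           # describes the skill and how to invoke the script
└── scripts/
    └── word_freq.py   # reads the passage from stdin, prints top-10 counts
```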
On LLM backbone reliability: without PythonInterpreterTool, a language model cannot perform exact word counting on raw text; LLMs are well known to miscount tokens and words. The skill doesn't add that capability; PythonInterpreterTool does. What the skill adds is the encoded knowledge of how to use that tool correctly, so no caller ever has to rediscover it.