# tool_demo.py
#
# Minimal example of Ollama tool calling.
# The model requests a function call; our code executes it.
#
# CHEG 667-013
# E. M. Furst
from ollama import chat


# Define a tool as a regular Python function.
# The type hints and docstring tell the model what the function does.
def add(a: int, b: int) -> int:
    """
    Add two numbers together.

    Args:
        a: The first number
        b: The second number

    Returns:
        The sum of the two numbers
    """
    return a + b

# A lookup table so we can dispatch by name
tools = {
    'add': add,
}
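
# A second tool can be registered the same way: any typed, docstringed
# Python function works. (`multiply` is an illustrative addition, not
# part of the original demo -- uncomment it, and pass it in the tools=
# list of the chat() call below, to try it.)
# def multiply(a: int, b: int) -> int:
#     """Multiply two numbers together."""
#     return a * b
# tools['multiply'] = multiply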

# Send a message with the tool available
messages = [{'role': 'user', 'content': 'What is 247 plus 863?'}]
response = chat(
    'llama3.1:8b',
    messages=messages,
    tools=[add],
)

# The model responds with a tool call, not text
if response.message.tool_calls:
    # Record the assistant's tool-call message in the conversation history
    messages.append(response.message)
    for tool_call in response.message.tool_calls:
        name = tool_call.function.name
        args = tool_call.function.arguments
        print(f'Model wants to call: {name}({args})')
        # Execute the function
        result = tools[name](**args)
        print(f'Result: {result}')
        # Send the result back to the model
        messages.append({
            'role': 'tool',
            'content': str(result),
            'tool_name': name,
        })
    # Get the final natural-language response
    final = chat('llama3.1:8b', messages=messages)
    print(f'\nModel: {final.message.content}')
else:
    # Model chose not to use a tool
    print(f'Model: {response.message.content}')