Tools
MCP tools are server-side operations that LLMs can invoke to perform actions such as reading files, making API calls, or running calculations. This guide covers everything you need to know about working with MCP tools.
Table of contents
- Tool Discovery
- Tool Execution
- Human-in-the-Loop
- Streaming Responses
- Error Handling
- Tool Inspection
- Advanced Tool Usage
- Performance Considerations
- Next Steps
Tool Discovery
Listing Available Tools
Get all tools from an MCP server:
```ruby
client = RubyLLM::MCP.client(
  name: "filesystem",
  transport_type: :stdio,
  config: {
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", "/path/to/directory"]
  }
)

# Get all available tools
tools = client.tools

puts "Available tools:"
tools.each do |tool|
  puts "- #{tool.name}: #{tool.description}"

  # Show input schema
  tool.input_schema["properties"]&.each do |param, schema|
    required = tool.input_schema["required"]&.include?(param) ? " (required)" : ""
    puts "  - #{param}: #{schema['description']}#{required}"
  end
end
```
Getting a Specific Tool
```ruby
# Get a specific tool by name
file_tool = client.tool("read_file")

puts "Tool: #{file_tool.name}"
puts "Description: #{file_tool.description}"
puts "Input schema: #{file_tool.input_schema}"
```
Refreshing Tool Cache
Tools are cached to improve performance. Refresh when needed:
```ruby
# Refresh all tools
tools = client.tools(refresh: true)

# Refresh a specific tool
tool = client.tool("read_file", refresh: true)
```
Tool Execution
Direct Tool Execution
Execute tools directly, without involving an LLM:
```ruby
# Execute a file reading tool
result = client.execute_tool(
  name: "read_file",
  parameters: {
    path: "README.md"
  }
)

puts "File contents: #{result}"
```
Structured Tool Output
Structured tool output was introduced in MCP Protocol 2025-06-18.
Tools can now specify output schemas for structured responses, enabling type-safe tool interactions:
```ruby
# Tools with output schemas will automatically validate their structured content
tool = client.tool("data_analyzer")
result = tool.execute(data: "sample data")

# If the tool has an output schema, structured content is validated.
# Invalid structured outputs will return an error.
puts result # Returns validated structured content or an error message
```
Checking Tool Schemas
Tools now support both input and output schemas:
```ruby
tools = client.tools
tools.each do |tool|
  puts "Tool: #{tool.name}"

  # Input schema (parameters)
  if tool.input_schema
    puts "  Input Schema: #{tool.input_schema}"
  end

  # Output schema (return value validation)
  if tool.output_schema
    puts "  Output Schema: #{tool.output_schema}"
  end
end
```
Validation Behavior
- Valid structured output: Returns the structured content
- Invalid structured output: Returns an error with validation details
- No output schema: Behaves as before (text-based output)
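To make the validation behavior concrete, here is a plain-Ruby sketch of how an output-schema check might work. The `validate_output` helper and `TYPE_CHECKS` table are hypothetical illustrations; the gem performs this validation internally.

```ruby
# Sketch of output-schema validation: check that a structured result contains
# every required key and that each value matches the declared type.
# Illustration only -- not the gem's actual implementation.
TYPE_CHECKS = {
  "string" => ->(v) { v.is_a?(String) },
  "number" => ->(v) { v.is_a?(Numeric) },
  "object" => ->(v) { v.is_a?(Hash) }
}.freeze

def validate_output(schema, result)
  missing = (schema["required"] || []) - result.keys.map(&:to_s)
  return { error: "missing keys: #{missing.join(', ')}" } unless missing.empty?

  schema["properties"].each do |key, spec|
    value = result[key] || result[key.to_sym]
    check = TYPE_CHECKS[spec["type"]]
    next unless check
    return { error: "#{key} should be a #{spec['type']}" } unless check.call(value)
  end
  result
end

schema = { "required" => ["temperature"],
           "properties" => { "temperature" => { "type" => "number" } } }
validate_output(schema, { temperature: 18.5 }) # valid: returns the result unchanged
validate_output(schema, {})                    # invalid: returns an error hash
```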
Example with Schema Validation
```ruby
# A tool that returns structured data
weather_tool = client.tool("get_weather")
result = weather_tool.execute(location: "San Francisco")

# If the tool has an output schema, the result is validated
if result.is_a?(Hash) && result[:error]
  puts "Tool validation failed: #{result[:error]}"
else
  # Structured, validated output
  puts "Temperature: #{result[:temperature]}"
  puts "Humidity: #{result[:humidity]}"
end
```
Human-Friendly Display Names
As of MCP Protocol 2025-06-18, tools support title fields for a better user experience:
```ruby
tools = client.tools
tools.each do |tool|
  # Access the display-friendly title if available
  title = tool.annotations&.title || tool.name
  puts "Tool: #{title} - #{tool.description}"

  # Check if the tool has a human-friendly title
  if tool.annotations&.title
    puts "  Display Name: #{tool.annotations.title}"
    puts "  Programmatic Name: #{tool.name}"
  end
end
```
This separates programmatic identifiers from human-readable names for better UX.
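The fallback logic above can be captured in a tiny helper. Note that `Tool` and `Annotations` here are stand-in Structs for illustration, not the gem's actual classes:

```ruby
# Display-name fallback: prefer the human-friendly title when present,
# otherwise fall back to the programmatic name.
Annotations = Struct.new(:title, keyword_init: true)
Tool = Struct.new(:name, :annotations, keyword_init: true)

def display_name(tool)
  tool.annotations&.title || tool.name
end

titled   = Tool.new(name: "read_file", annotations: Annotations.new(title: "Read File"))
untitled = Tool.new(name: "list_files", annotations: nil)

display_name(titled)   # => "Read File"
display_name(untitled) # => "list_files"
```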
Using Tools with RubyLLM
Integrate tools into LLM conversations:
```ruby
# Add all tools to a chat
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)

# Ask the LLM to use tools
response = chat.ask("Read the README.md file and summarize it")
puts response
```
Individual Tool Usage
Use specific tools in conversations:
```ruby
# Get a specific tool
search_tool = client.tool("search_files")

# Add only this tool to the chat
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(search_tool)

response = chat.ask("Search for all Ruby files in the project")
puts response
```
Human-in-the-Loop
Control tool execution with human approval:
Basic Human-in-the-Loop
Human-in-the-loop approvals are configured with handler classes:
```ruby
class BasicApprovalHandler < RubyLLM::MCP::Handlers::HumanInTheLoopHandler
  def execute
    if tool_name == "delete_file"
      deny("Deletion requires explicit approval")
    else
      approve
    end
  end
end

client.on_human_in_the_loop(BasicApprovalHandler)
```
Block-based human-in-the-loop callbacks are no longer supported.
Handler Classes
1.0+
Handler classes provide a powerful, testable way to handle approvals with async support.
Basic Handler Class
```ruby
class SafeToolHandler < RubyLLM::MCP::Handlers::HumanInTheLoopHandler
  option :safe_tools, default: ["read_file", "list_files"]

  def execute
    if options[:safe_tools].include?(tool_name)
      approve
    else
      deny("Tool '#{tool_name}' requires explicit approval")
    end
  end
end

# Use globally
RubyLLM::MCP.configure do |config|
  config.on_human_in_the_loop SafeToolHandler
end

# Or per-client
client.on_human_in_the_loop(SafeToolHandler, safe_tools: ["read_file", "list_files"])
```
Handler with Guards
```ruby
class ParameterValidationHandler < RubyLLM::MCP::Handlers::HumanInTheLoopHandler
  # Guards run before #execute. Returning true passes the guard;
  # returning a string denies the call with that reason.
  guard :check_parameters
  guard :check_path_safety

  def execute
    # Logic here only runs if guards pass
    approve
  end

  private

  def check_parameters
    return true unless parameters[:path]
    return true unless parameters[:path].include?("..")

    "Path traversal detected"
  end

  def check_path_safety
    return true unless parameters[:path]
    return true unless parameters[:path].start_with?("/etc")

    "Access to system directories denied"
  end
end
```
Async Approval via Websocket
Ideal when a real user must approve via a UI:
```ruby
class WebsocketApprovalHandler < RubyLLM::MCP::Handlers::HumanInTheLoopHandler
  async_execution timeout: 300 # 5 minutes

  option :websocket_service, required: true
  option :user_id, required: true

  def execute
    # Send an approval request to the user's browser/app
    options[:websocket_service].broadcast(
      "approvals_#{options[:user_id]}",
      {
        type: "approval_request",
        id: approval_id,
        tool: tool_name,
        parameters: parameters
      }
    )

    # Return a deferred decision -- approval happens later via the registry
    defer
  end
end

# Configure the handler
client.on_human_in_the_loop(
  WebsocketApprovalHandler,
  websocket_service: ActionCable.server,
  user_id: current_user.id
)

# When the user approves/denies via websocket:
class ApprovalsChannel < ApplicationCable::Channel
  def approve(data)
    RubyLLM::MCP::Handlers::HumanInTheLoopRegistry.approve(
      data["approval_id"]
    )
  end

  def deny(data)
    RubyLLM::MCP::Handlers::HumanInTheLoopRegistry.deny(
      data["approval_id"],
      reason: data["reason"]
    )
  end
end
```
If no approval/denial arrives before timeout, the deferred approval is denied automatically and tool execution is cancelled.
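The timeout behavior can be sketched with a plain Ruby Queue. This is an illustration only, not the gem's registry implementation, and `Queue#pop(timeout:)` requires Ruby 3.2+:

```ruby
# A deferred decision waits on a queue; if nothing arrives within the window,
# it defaults to denial -- mirroring the automatic-denial-on-timeout behavior
# described above.
def await_decision(queue, timeout:)
  decision = queue.pop(timeout: timeout) # returns nil when the timeout expires
  decision || { status: :denied, reason: "approval timed out" }
end

approvals = Queue.new
approvals << { status: :approved }
await_decision(approvals, timeout: 0.1)  # => { status: :approved }

await_decision(Queue.new, timeout: 0.1)  # times out, so the decision is denial
```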
Reusable Policy Handler
```ruby
class PolicyApprovalHandler < RubyLLM::MCP::Handlers::HumanInTheLoopHandler
  option :safe_tools, default: []

  def execute
    options[:safe_tools].include?(tool_name) ? approve : deny("Tool requires approval")
  end
end

client.on_human_in_the_loop(
  PolicyApprovalHandler,
  safe_tools: ["read_file", "list_files"]
)
```
Return Contract
Human-in-the-loop handlers return structured decisions:
```ruby
approve                 # => { status: :approved }
deny("Too dangerous")   # => { status: :denied, reason: "Too dangerous" }
defer(timeout: 300)     # => { status: :deferred, timeout: 300 }
```
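A consumer of these decision hashes might dispatch on `:status` like this (hypothetical code; only the hash shapes come from the contract above):

```ruby
# Dispatch on the decision's :status key to describe what happens next.
def describe_decision(decision)
  case decision[:status]
  when :approved then "tool call proceeds"
  when :denied   then "tool call blocked: #{decision[:reason]}"
  when :deferred then "waiting up to #{decision[:timeout]}s for a human"
  end
end

describe_decision({ status: :approved })                        # => "tool call proceeds"
describe_decision({ status: :denied, reason: "Too dangerous" }) # => "tool call blocked: Too dangerous"
describe_decision({ status: :deferred, timeout: 300 })          # => "waiting up to 300s for a human"
```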
Streaming Responses
Monitor tool execution in real-time:
```ruby
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)

chat.ask("Analyze all files in the project") do |chunk|
  if chunk.tool_call?
    chunk.tool_calls.each do |key, tool_call|
      puts "🔧 Using tool: #{tool_call.name}"
      puts "   Parameters: #{tool_call.parameters}"
    end
  else
    print chunk.content
  end
end
```
Error Handling
Tool Execution Errors
```ruby
begin
  result = client.execute_tool(
    name: "read_file",
    parameters: { path: "/nonexistent/file.txt" }
  )
rescue RubyLLM::MCP::Errors::ToolError => e
  puts "Tool execution failed: #{e.message}"
  puts "Error details: #{e.error_details}"
end
```
Tool Not Found
```ruby
begin
  tool = client.tool("nonexistent_tool")
rescue RubyLLM::MCP::Errors::ToolNotFound => e
  puts "Tool not found: #{e.message}"
end
```
Timeout Errors
```ruby
begin
  result = client.execute_tool(
    name: "slow_operation",
    parameters: { size: "large" }
  )
rescue RubyLLM::MCP::Errors::TimeoutError => e
  puts "Tool execution timed out: #{e.message}"
end
```
Tool Inspection
Understanding Tool Schema
```ruby
tool = client.tool("create_file")

# Basic information
puts "Name: #{tool.name}"
puts "Description: #{tool.description}"

# Input schema details
schema = tool.input_schema
puts "Required parameters: #{schema['required']}"

schema['properties'].each do |param, details|
  puts "Parameter: #{param}"
  puts "  Type: #{details['type']}"
  puts "  Description: #{details['description']}"
  puts "  Required: #{schema['required']&.include?(param)}"
end
```
Tool Capabilities
```ruby
# Check if a tool supports specific features
tool = client.tool("search_files")

# Check parameter types
if tool.input_schema["properties"]["pattern"]
  puts "Supports pattern matching"
end

if tool.input_schema["properties"]["recursive"]
  puts "Supports recursive search"
end
```
Advanced Tool Usage
Tool Result Processing
```ruby
# Process tool results
result = client.execute_tool(
  name: "list_files",
  parameters: { directory: "/path/to/search" }
)

# Parse JSON results, tolerating non-JSON output
if result.is_a?(String)
  begin
    parsed = JSON.parse(result)
    puts "Found #{parsed['files'].length} files"
  rescue JSON::ParserError
    puts result
  end
end
```
Chaining Tool Calls
```ruby
# First tool call
files = client.execute_tool(
  name: "list_files",
  parameters: { directory: "/project" }
)

# Use the result in a second tool call
parsed_files = JSON.parse(files)
ruby_files = parsed_files["files"].select { |f| f.end_with?(".rb") }

ruby_files.each do |file|
  content = client.execute_tool(
    name: "read_file",
    parameters: { path: file }
  )
  puts "File: #{file}"
  puts content
end
```
Tool Composition
```ruby
# Create a higher-level operation using multiple tools
def analyze_project(client)
  # Get the project structure
  structure = client.execute_tool(
    name: "list_files",
    parameters: { directory: ".", recursive: true }
  )

  # Read important files
  readme = client.execute_tool(
    name: "read_file",
    parameters: { path: "README.md" }
  )

  # Search for specific patterns
  todos = client.execute_tool(
    name: "search_files",
    parameters: { pattern: "TODO", directory: "." }
  )

  {
    structure: structure,
    readme: readme,
    todos: todos
  }
end

# Use in a chat
analysis = analyze_project(client)

chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)

response = chat.ask("Based on this project analysis: #{analysis}, provide recommendations")
puts response
```
Performance Considerations
Tool Caching
```ruby
# Cache expensive tool results
class ToolCache
  def initialize(client)
    @client = client
    @cache = {}
  end

  # Mirror the client's keyword signature so the wrapper is a drop-in
  def execute_tool(name:, parameters:)
    key = "#{name}:#{parameters.hash}"
    @cache[key] ||= @client.execute_tool(
      name: name,
      parameters: parameters
    )
  end
end

cached_client = ToolCache.new(client)
```
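One caveat: Ruby's `Object#hash` is seeded per process, so the cache key above is only stable within a single run. If the cache ever needs to persist across processes (for example, backed by Redis), derive a deterministic key instead. A sketch, with a hypothetical `stable_cache_key` helper:

```ruby
require "json"

# Deterministic cache key: sort the parameters so key order doesn't matter,
# then serialize. Stable across processes, unlike Object#hash.
def stable_cache_key(name, parameters)
  "#{name}:#{JSON.generate(parameters.sort.to_h)}"
end

stable_cache_key("read_file", { path: "README.md" })
# => 'read_file:{"path":"README.md"}'
```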
Batch Operations
```ruby
# Process multiple files efficiently
files = ["file1.txt", "file2.txt", "file3.txt"]
contents = {}

files.each do |file|
  contents[file] = client.execute_tool(
    name: "read_file",
    parameters: { path: file }
  )
end

# Use all contents in a single chat
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)
response = chat.ask("Analyze these files: #{contents}")
puts response
```
Next Steps
- Resources - Working with MCP resources
- Prompts - Using predefined prompts
- Notifications - Handling real-time updates