Getting Started

This guide covers the fundamentals of getting started with RubyLLM MCP, including installation, basic setup, and your first MCP client connection. It assumes a basic knowledge of RubyLLM; if you need to fill in any gaps, read the RubyLLM Getting Started guide first.

Table of contents

  1. Installation
    1. Prerequisites
    2. Installing the Gem
  2. Basic Setup
    1. Configure RubyLLM
    2. Your First MCP Client
  3. Basic Usage
    1. Using MCP Tools
    2. Manual Tool Execution
    3. Working with Resources
  4. Connection Management
    1. Manual Connection Control
    2. Health Checks
  5. Error Handling
  6. Next Steps
  7. Common Patterns
    1. Multiple Clients
    2. Combining Features

Installation

Prerequisites

  • Ruby 3.1.3 or higher
  • RubyLLM gem installed
  • An LLM provider API key (OpenAI, Anthropic, or Google)

Installing the Gem

Add RubyLLM MCP to your project:

bundle add ruby_llm-mcp

Or add to your Gemfile:

gem 'ruby_llm-mcp'

Then install:

bundle install

Basic Setup

Configure RubyLLM

First, configure RubyLLM with your preferred provider:

require 'ruby_llm/mcp'

# For OpenAI
RubyLLM.configure do |config|
  config.openai_api_key = "your-openai-key"
end
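
Other providers follow the same pattern; a minimal sketch, assuming the standard RubyLLM configuration keys and API keys stored in environment variables:

# For Anthropic
RubyLLM.configure do |config|
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
end

# For Google Gemini
RubyLLM.configure do |config|
  config.gemini_api_key = ENV["GEMINI_API_KEY"]
end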

Your First MCP Client

Create a connection to an MCP server:

# Connect to a local MCP server via stdio
client = RubyLLM::MCP.client(
  name: "my-first-server",
  transport_type: :stdio,
  config: {
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", "/path/to/directory"]
  }
)

# Check if the connection is alive
puts client.alive? # => true

Basic Usage

Using MCP Tools

MCP tools are automatically converted into RubyLLM-compatible tools:

# Get all available tools
tools = client.tools
puts "Available tools:"
tools.each do |tool|
  puts "- #{tool.name}: #{tool.description}"
end

# Use tools in a chat
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)

response = chat.ask("List the files in the current directory")
puts response
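
For long answers you can stream the reply as it arrives. A minimal sketch, assuming RubyLLM's standard block-streaming interface where each chunk exposes a content reader:

chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*client.tools)

# Print the response incrementally as chunks arrive
chat.ask("List the files in the current directory") do |chunk|
  print chunk.content
end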

Manual Tool Execution

You can also execute tools directly:

# Execute a specific tool
tool = client.tool("read_file")
result = tool.execute(path: "README.md")

puts result

Working with Resources

Resources provide static or dynamic data for conversations:

# Get available resources
resources = client.resources
puts "Available resources:"
resources.each do |resource|
  puts "- #{resource.name}: #{resource.description}"
end

# Use a resource in a chat
chat = RubyLLM.chat(model: "gpt-4")
chat.with_resource(client.resource("project_structure"))

response = chat.ask("What is the structure of this project?")
puts response
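
You can also read a resource's contents directly, outside of a chat. A sketch, assuming resource objects expose a content reader (check your gem version's API if this differs):

resource = client.resource("project_structure")
puts resource.name
puts resource.content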

Connection Management

Manual Connection Control

You can control the connection lifecycle manually:

# Create a client without starting it
client = RubyLLM::MCP.client(
  name: "my-server",
  transport_type: :stdio,
  start: false,
  config: { command: "node", args: ["server.js"] }
)

# Start the connection
client.start

# Check if it's alive
puts client.alive? # => true

# Restart if needed
client.restart!

# Stop the connection
client.stop
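
When you manage the lifecycle yourself, an ensure block guarantees the connection is closed even if an exception is raised mid-use. A minimal sketch using only the calls shown above:

begin
  client.start

  # ... use the client: call tools, read resources, etc. ...
ensure
  client.stop if client.alive?
end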

Health Checks

Monitor your MCP server connection via ping:

# Ping the server to verify you can communicate with the MCP server
if client.ping
  puts "Server is responsive"
else
  puts "Server is not responding"
end

# Confirm the connection is still marked as alive
puts "Connection alive: #{client.alive?}"

Error Handling

Handle common errors when working with MCP:

begin
  client = RubyLLM::MCP.client(
    name: "my-server",
    transport_type: :stdio,
    config: {
      command: "nonexistent-command"
    }
  )
rescue RubyLLM::MCP::Errors::TransportError => e
  puts "Failed to connect: #{e.message}"
end

# Handle tool execution errors
begin
  result = client.execute_tool(
    name: "nonexistent_tool",
    parameters: {}
  )
rescue RubyLLM::MCP::Errors::ToolError => e
  puts "Tool error: #{e.message}"
end
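
In application code it is often convenient to wrap these rescues in a small helper. A sketch, assuming client.tool returns nil for unknown tool names:

def safe_tool_call(client, tool_name, **params)
  tool = client.tool(tool_name)
  return nil if tool.nil?

  tool.execute(**params)
rescue RubyLLM::MCP::Errors::ToolError => e
  warn "Tool error: #{e.message}"
  nil
end

result = safe_tool_call(client, "read_file", path: "README.md")
puts result if result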

Next Steps

Now that you have the basics down, explore the rest of the documentation for deeper coverage of tools, resources, prompts, transports, and error handling.

Common Patterns

Multiple Clients

Manage multiple MCP servers simultaneously:

# Create multiple clients
file_client = RubyLLM::MCP.client(
  name: "filesystem",
  transport_type: :stdio,
  config: {
    command: "npx",
    args: ["@modelcontextprotocol/server-filesystem", "/"]
  }
)

api_client = RubyLLM::MCP.client(
  name: "api-server",
  transport_type: :sse,
  config: {
    url: "https://api.example.com/mcp/sse"
  }
)

# Use tools from both clients
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*file_client.tools, *api_client.tools)

response = chat.ask("Read the config file and make an API call")
puts response
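
With more than a couple of servers, collecting the clients in an array keeps setup and teardown tidy:

clients = [file_client, api_client]

# Gather every tool from every client
chat = RubyLLM.chat(model: "gpt-4")
chat.with_tools(*clients.flat_map(&:tools))

# Shut all connections down when finished
clients.each(&:stop)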

Combining Features

Use tools, resources, and prompts together:

chat = RubyLLM.chat(model: "gpt-4")

# Add tools for capabilities
chat.with_tools(*client.tools)

# Add resources for context
chat.with_resource(client.resource("project_overview"))

# Add prompts for guidance
chat.with_prompt(
  client.prompt("analysis_template"),
  arguments: { focus: "performance" }
)

response = chat.ask("Analyze the project")
puts response