There’s something beautifully ironic about using a protocol from 1991 to power cutting-edge AI assistants in 2024. When I first started building gopher-mcp, people thought I was either nostalgic or slightly unhinged. But here’s the thing—sometimes the old ways of doing things teach us something profound about simplicity, focus, and what the internet could have been.

A Brief Journey Back to 1991

Picture this: the World Wide Web doesn’t exist yet. Tim Berners-Lee is still tinkering with his hypertext ideas at CERN. But at the University of Minnesota, a team led by Mark McCahill is solving the same problem in a completely different way. They create the Gopher protocol—a simple, hierarchical system for organizing and accessing information across networks.

For a brief, shining moment in the early 90s, Gopher was actually more popular than the web. It was faster, more organized, and frankly, easier to navigate. You could browse through information like walking through a well-organized library, with clear categories and subcategories leading you exactly where you needed to go.

Why Gopher Lost (And Why That Matters)

Gopher’s decline wasn’t about technical superiority—the web won for reasons that had little to do with the protocol itself:

  • Licensing concerns: The University of Minnesota’s unclear licensing stance scared away developers
  • Multimedia limitations: Gopher was text-focused in an era falling in love with images and multimedia
  • Simplicity vs. flexibility: The web’s chaotic flexibility beat Gopher’s organized simplicity

But here’s what’s fascinating: in our current era of information overload, ad-bloated websites, and JavaScript-heavy pages that take forever to load, Gopher’s minimalist approach feels almost prophetic.

The Gopher Renaissance

Fast-forward to today, and there’s a quiet renaissance happening. Gopher servers are popping up again, maintained by people who appreciate:

  • Content over presentation: No ads, no tracking, no JavaScript—just pure information
  • Speed: Gopher pages load instantly because they’re just text
  • Simplicity: The protocol is so simple you can implement a client in an afternoon
  • Focus: Without multimedia distractions, you actually read the content

This is where the Model Context Protocol comes in. What if AI assistants could browse this clean, focused corner of the internet? What if they could access information without wading through SEO spam and cookie banners?

What is the Model Context Protocol?

Before diving into Gopher specifics, let me explain MCP briefly. Think of it as a standardized bridge that lets AI models safely interact with external resources. Instead of each tool reinventing the wheel, MCP provides a consistent interface for everything from browsing protocols to accessing databases.

The brilliant part? It’s designed with security and extensibility in mind. Your AI assistant can browse Gopher servers, but it can’t accidentally delete your files or send spam emails.

Building the Gopher MCP Server

Protocol Abstraction Layer

The key insight I had while building gopher-mcp was that protocols like Gopher and Gemini (its modern spiritual successor) share common patterns. Instead of hardcoding Gopher-specific logic everywhere, I created an abstraction:

// Note: an `async fn` in a trait that is later boxed as `dyn ProtocolHandler`
// needs the async-trait crate for dyn-compatible dispatch.
#[async_trait::async_trait]
pub trait ProtocolHandler {
    async fn fetch(&self, url: &str) -> Result<ProtocolResponse, Error>;
    fn supports_url(&self, url: &str) -> bool;
}

pub struct GopherHandler;
pub struct GeminiHandler;

#[async_trait::async_trait]
impl ProtocolHandler for GopherHandler {
    async fn fetch(&self, url: &str) -> Result<ProtocolResponse, Error> {
        let gopher_url = GopherUrl::parse(url)?;
        self.fetch_gopher(&gopher_url).await
    }
    
    fn supports_url(&self, url: &str) -> bool {
        url.starts_with("gopher://")
    }
}

This pattern turned out to be incredibly powerful. Adding support for Gemini was just a matter of implementing the trait—no need to touch the core MCP logic.
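To make that concrete, here is a minimal sketch of how URL routing over `supports_url` could work. The `SupportsUrl` trait and `route` function are illustrative names of mine, not from gopher-mcp, and only the synchronous check is shown so the example runs without an async runtime:

```rust
// Hypothetical router: pick the first handler that claims a URL.
// Only supports_url is exercised, so no async runtime is needed.
trait SupportsUrl {
    fn supports_url(&self, url: &str) -> bool;
}

struct GopherHandler;
struct GeminiHandler;

impl SupportsUrl for GopherHandler {
    fn supports_url(&self, url: &str) -> bool {
        url.starts_with("gopher://")
    }
}

impl SupportsUrl for GeminiHandler {
    fn supports_url(&self, url: &str) -> bool {
        url.starts_with("gemini://")
    }
}

/// Returns the index of the first handler supporting `url`, if any.
fn route(handlers: &[&dyn SupportsUrl], url: &str) -> Option<usize> {
    handlers.iter().position(|h| h.supports_url(url))
}
```

Adding a new protocol then really is just one more `impl` plus one more entry in the handler list; the routing code never changes.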

Gopher Protocol Implementation

The Gopher protocol is refreshingly simple. Here’s how a basic client works:

use tokio::net::TcpStream;
use tokio::io::{AsyncReadExt, AsyncWriteExt};

pub struct GopherClient;

impl GopherClient {
    pub async fn fetch(&self, url: &GopherUrl) -> Result<GopherResponse, Error> {
        let mut stream = TcpStream::connect((url.host.as_str(), url.port)).await?;
        
        // Send Gopher request (just the selector + CRLF)
        let request = format!("{}\r\n", url.selector);
        stream.write_all(request.as_bytes()).await?;
        
        // Read response
        let mut buffer = Vec::new();
        stream.read_to_end(&mut buffer).await?;
        
        GopherResponse::parse(buffer, url.item_type)
    }
}

That’s it. No HTTP headers, no status codes, no complex negotiation. You send a selector, you get back data. It’s almost zen-like in its simplicity.
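The client above assumes a `GopherUrl::parse` helper. Per RFC 4266, a Gopher URL looks like `gopher://host[:port]/<item-type><selector>`, with the port defaulting to 70. Here is a minimal sketch of what that parser might look like; the field names match the client code, but the error handling (plain `String` errors) is simplified for illustration:

```rust
/// Minimal sketch of the GopherUrl type assumed by the client above.
/// RFC 4266 form: gopher://host[:port]/<item-type><selector>.
#[derive(Debug, PartialEq)]
pub struct GopherUrl {
    pub host: String,
    pub port: u16,
    pub item_type: u8,
    pub selector: String,
}

impl GopherUrl {
    pub fn parse(url: &str) -> Result<Self, String> {
        let rest = url
            .strip_prefix("gopher://")
            .ok_or_else(|| format!("not a gopher URL: {url}"))?;
        // Split the authority (host[:port]) from the path.
        let (authority, path) = match rest.find('/') {
            Some(i) => (&rest[..i], &rest[i + 1..]),
            None => (rest, ""),
        };
        // Optional :port, defaulting to the well-known Gopher port 70.
        let (host, port) = match authority.rsplit_once(':') {
            Some((h, p)) => (h, p.parse::<u16>().map_err(|e| e.to_string())?),
            None => (authority, 70),
        };
        // The first path byte is the item type; the rest is the selector.
        let item_type = path.bytes().next().unwrap_or(b'1'); // default: directory
        let selector = path.get(1..).unwrap_or("").to_string();
        Ok(Self {
            host: host.to_string(),
            port,
            item_type,
            selector,
        })
    }
}
```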

Content Type Detection

Gopher uses a simple but effective type system that predates MIME types:

#[derive(Debug, Clone, Copy)]
#[repr(u8)] // byte-literal discriminants require an explicit u8 repr
pub enum GopherItemType {
    TextFile = b'0',
    Directory = b'1',
    PhoneBook = b'2',
    Error = b'3',
    BinHexFile = b'4',
    BinaryFile = b'9',
    Mirror = b'+',
    GifFile = b'g',
    ImageFile = b'I',
    // ... more types
}

impl GopherItemType {
    pub fn to_mime_type(self) -> &'static str {
        match self {
            Self::TextFile => "text/plain",
            Self::Directory => "text/gopher-menu",
            Self::BinaryFile => "application/octet-stream",
            Self::GifFile => "image/gif",
            Self::ImageFile => "image/jpeg", // type 'I' is any image; JPEG is a pragmatic default
            // ... more mappings
            _ => "application/octet-stream", // fallback keeps the match exhaustive
        }
    }
}
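These item types appear as the first character of each line in a Gopher menu (type `1`). A menu line is tab-separated: type byte plus display string, then selector, host, and port, terminated by CRLF, with a lone `.` ending the listing. A sketch of a menu-line parser (the `MenuItem` name and lenient `Option`-based error handling are my own, not gopher-mcp's):

```rust
/// One entry in a Gopher menu. A menu line has the form
/// <type><display>\t<selector>\t<host>\t<port> terminated by CRLF.
#[derive(Debug, PartialEq)]
pub struct MenuItem {
    pub item_type: u8,
    pub display: String,
    pub selector: String,
    pub host: String,
    pub port: u16,
}

/// Returns None for the terminating "." line, blank lines, or malformed input.
pub fn parse_menu_line(line: &str) -> Option<MenuItem> {
    let line = line.trim_end_matches(|c| c == '\r' || c == '\n');
    if line == "." || line.is_empty() {
        return None;
    }
    let mut fields = line.split('\t');
    let first = fields.next()?;
    Some(MenuItem {
        item_type: *first.as_bytes().first()?,
        display: first.get(1..).unwrap_or("").to_string(),
        selector: fields.next()?.to_string(),
        host: fields.next()?.to_string(),
        port: fields.next()?.parse().ok()?,
    })
}
```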

Practical Applications

Research and Documentation

One of the most compelling use cases I’ve discovered is research. Gopher servers often host high-quality, curated content:

  • Academic papers: Many universities maintain Gopher archives
  • Technical documentation: Clean, distraction-free technical docs
  • Historical archives: Digital libraries and historical collections

When your AI assistant can browse these resources, it’s accessing information that’s often more reliable and better curated than random web pages.

Development Workflows

Here’s a practical example of how I use the Gopher MCP server in my development workflow:

# AI assistant browsing Gopher for technical documentation
> Browse gopher://gopher.floodgap.com/1/world for information about protocol specifications

# AI assistant accessing university research archives
> Search gopher://gopher.umn.edu/ for papers on distributed systems

# AI assistant exploring historical computing resources
> Navigate to gopher://sdf.org/1/users/cat/gopher-history for protocol history

The AI gets clean, focused content without the noise of modern web advertising and tracking.

Architecture Patterns for Protocol Servers

Resource-Centric Design

Building a protocol MCP server taught me the importance of separating concerns:

#[derive(Debug, Clone)]
pub struct Resource {
    pub uri: String,
    pub name: String,
    pub description: Option<String>,
    pub mime_type: Option<String>,
}

#[async_trait::async_trait] // async trait methods, as with ProtocolHandler
pub trait ResourceProvider {
    async fn list_resources(&self) -> Result<Vec<Resource>, Error>;
    async fn read_resource(&self, uri: &str) -> Result<Vec<u8>, Error>;
}

This pattern lets you swap out protocol implementations without touching the MCP logic. Want to add support for Finger protocol? Just implement the trait.
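As an illustration of how the two layers meet, here is a hedged sketch of turning a Gopher menu entry into an MCP `Resource`. The helper name is hypothetical, and the MIME mapping covers only a couple of types; the struct mirrors the `Resource` definition above so the example is self-contained:

```rust
// Mirrors the Resource struct above so this example runs standalone.
#[derive(Debug, Clone)]
pub struct Resource {
    pub uri: String,
    pub name: String,
    pub description: Option<String>,
    pub mime_type: Option<String>,
}

/// Hypothetical helper: build an MCP Resource from one Gopher menu entry.
fn resource_from_menu_entry(
    host: &str,
    port: u16,
    item_type: char,
    display: &str,
    selector: &str,
) -> Resource {
    Resource {
        // RFC 4266 URL form: gopher://host:port/<type><selector>
        uri: format!("gopher://{host}:{port}/{item_type}{selector}"),
        name: display.to_string(),
        description: None,
        mime_type: match item_type {
            '0' => Some("text/plain".to_string()),
            '1' => Some("text/gopher-menu".to_string()),
            _ => None, // remaining types elided for brevity
        },
    }
}
```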

Async-First Architecture

Protocol servers need to handle multiple concurrent requests efficiently:

use tokio::sync::RwLock;
use std::collections::HashMap;

pub struct CachedProtocolHandler {
    cache: RwLock<HashMap<String, CachedResponse>>,
    handler: Box<dyn ProtocolHandler + Send + Sync>,
}

impl CachedProtocolHandler {
    pub async fn fetch(&self, url: &str) -> Result<ProtocolResponse, Error> {
        // Check cache first
        {
            let cache = self.cache.read().await;
            if let Some(cached) = cache.get(url) {
                if !cached.is_expired() {
                    return Ok(cached.response.clone());
                }
            }
        }

        // Fetch and cache
        let response = self.handler.fetch(url).await?;
        let mut cache = self.cache.write().await;
        cache.insert(url.to_string(), CachedResponse::new(response.clone()));

        Ok(response)
    }
}
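The `CachedResponse` type referenced above could be as simple as a timestamped wrapper. This is a sketch under my own assumptions (a per-entry TTL passed at construction, and a `Vec<u8>` standing in for `ProtocolResponse`); the real server would take the TTL from configuration:

```rust
use std::time::{Duration, Instant};

/// Sketch of the CachedResponse type used by CachedProtocolHandler.
/// Vec<u8> stands in for ProtocolResponse to keep the example self-contained.
#[derive(Clone)]
pub struct CachedResponse {
    pub response: Vec<u8>,
    fetched_at: Instant,
    ttl: Duration,
}

impl CachedResponse {
    pub fn new(response: Vec<u8>, ttl: Duration) -> Self {
        Self {
            response,
            fetched_at: Instant::now(),
            ttl,
        }
    }

    /// An entry expires once its age reaches the TTL.
    pub fn is_expired(&self) -> bool {
        self.fetched_at.elapsed() >= self.ttl
    }
}
```

Note the read-then-write locking pattern in `fetch`: the read guard is dropped before the network call, so concurrent cache hits never block each other while a fetch is in flight.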

Best Practices for Protocol MCP Servers

Error Handling

Implement comprehensive error handling with context:

use thiserror::Error;

#[derive(Error, Debug)]
pub enum GopherError {
    #[error("Network error: {0}")]
    Network(#[from] std::io::Error),

    #[error("Invalid Gopher URL: {url}")]
    InvalidUrl { url: String },

    #[error("Server error: {message}")]
    ServerError { message: String },

    #[error("Timeout connecting to {host}:{port}")]
    Timeout { host: String, port: u16 },
}

Configuration Management

Keep configuration simple but flexible:

use serde::{Deserialize, Serialize};

#[derive(Debug, Deserialize, Serialize)]
pub struct GopherConfig {
    pub default_port: u16,
    pub timeout_seconds: u64,
    pub max_response_size: usize,
    pub cache_ttl_seconds: u64,
}

impl Default for GopherConfig {
    fn default() -> Self {
        Self {
            default_port: 70,
            timeout_seconds: 30,
            max_response_size: 1024 * 1024, // 1MB
            cache_ttl_seconds: 300, // 5 minutes
        }
    }
}

The Future of Alternative Protocols in AI

Building the Gopher MCP server opened my eyes to something interesting: there’s a whole ecosystem of alternative protocols that could benefit AI assistants:

  • Gemini: Gopher’s modern successor, adding TLS and a lightweight markup format (gemtext)
  • Finger: Simple user information protocol
  • NNTP: Network News Transfer Protocol for accessing Usenet
  • IRC: Real-time chat protocol integration

Each of these protocols represents a different approach to information sharing, and each could provide unique value to AI assistants.

What I Learned

Building gopher-mcp taught me that sometimes the old ways of doing things have wisdom we’ve forgotten. Gopher’s focus on content over presentation, its hierarchical organization, and its blazing speed are exactly what AI assistants need when browsing for information.

The protocol’s simplicity also made it an excellent learning platform for understanding MCP server architecture. If you’re new to building MCP servers, I’d recommend starting with a simple protocol like Gopher—you’ll learn the patterns without getting bogged down in complex protocol details.

Getting Started

Want to try the Gopher MCP server yourself? Here’s how to get started:

# Install the server
cargo install gopher-mcp

# Configure your AI assistant to use it
# (specific steps depend on your MCP client)

# Start exploring Gopher space
# Try gopher://gopher.floodgap.com/ for a good starting point

The Gopher internet is small but surprisingly rich. You’ll find everything from technical documentation to poetry, all presented in that clean, distraction-free format that makes information consumption a pleasure rather than a chore.


Interested in exploring more? Check out the gopher-mcp repository for complete implementation details and examples.