
Clarissa

Brings intelligent AI assistance to the command line with multi-model support and extensible tool execution

An AI-powered terminal assistant with tool execution capabilities, built with Bun and Ink, featuring streaming responses, MCP integration, and session persistence.

TypeScript · Bun · Ink · React · MCP · OpenRouter
Screenshot of Clarissa

Architecture

flowchart LR
    subgraph Input
        A[User Message]
        B[Piped Content]
    end
    subgraph Clarissa
        C[Agent Loop]
        D[LLM Client]
        E[Tool Registry]
        F[Context Manager]
    end
    subgraph External
        G[OpenRouter API]
        H[MCP Servers]
    end
    A --> C
    B --> C
    C <--> D
    C <--> E
    C <--> F
    D <--> G
    E -.-> H
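
The diagram maps onto a few internal contracts. Below is a minimal, illustrative TypeScript sketch of those contracts; the names and shapes (ChatMessage, Tool, ToolRegistry, LLMClient, ContextManager) are assumptions made for explanation, not Clarissa's actual source.

// Illustrative contracts for the components above; names and shapes are
// assumptions, not Clarissa's actual source.

type ChatMessage =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "tool"; content: string; toolCallId: string };

interface ToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}

interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema describing the arguments
  execute(args: Record<string, unknown>): Promise<string>;
}

interface ToolRegistry {
  list(): Tool[]; // built-in tools plus MCP-provided ones
  get(name: string): Tool | undefined;
}

interface LLMClient {
  // Sends the conversation and tool definitions to OpenRouter and returns
  // the streamed text plus any tool calls the model requested.
  complete(messages: ChatMessage[], tools: Tool[]): Promise<{ text: string; toolCalls: ToolCall[] }>;
}

interface ContextManager {
  add(message: ChatMessage): void;
  messages(): ChatMessage[]; // history truncated to fit the model's context window
}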

The Problem

Developers need AI assistance in terminal workflows, but existing solutions lack the ability to execute tools, persist context across sessions, and extend functionality through standardized protocols. Most terminal AI tools are limited to simple chat interfaces without the ability to take action on behalf of the user.

The Solution

Built an AI agent that implements the ReAct (Reasoning + Acting) pattern, enabling the LLM to reason about tasks and execute tools in an iterative loop. The agent features built-in tools for file operations, Git integration, and shell commands, with extensibility through the Model Context Protocol (MCP). Session persistence and a memory system allow context to carry across conversations.
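
To make the session-persistence idea concrete, here is a hedged sketch that writes conversation history as JSON under a per-user sessions directory. The directory layout, the Session shape, and the helper names (saveSession, loadSession) are assumptions, not Clarissa's implementation; it reuses the illustrative ChatMessage type from the architecture sketch above.

// Hedged sketch of session persistence; not Clarissa's actual code.
import { mkdir } from "node:fs/promises";
import { join } from "node:path";

interface Session {
  id: string;
  createdAt: string;
  messages: ChatMessage[];
}

// Assumed location; the real directory may differ.
const SESSIONS_DIR = join(process.env.HOME ?? ".", ".clarissa", "sessions");

async function saveSession(session: Session): Promise<void> {
  await mkdir(SESSIONS_DIR, { recursive: true });
  const path = join(SESSIONS_DIR, `${session.id}.json`);
  await Bun.write(path, JSON.stringify(session, null, 2));
}

async function loadSession(id: string): Promise<Session | null> {
  const file = Bun.file(join(SESSIONS_DIR, `${id}.json`));
  return (await file.exists()) ? ((await file.json()) as Session) : null;
}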

The Results

  • Multi-model support via OpenRouter (Claude, GPT-4, Gemini, Llama, DeepSeek)
  • Real-time streaming responses with automatic context management
  • Session persistence and cross-session memory system
  • MCP integration for extensibility without code changes
14 Built-in Tools · 10+ Supported Models · Up to 1M Context Window · 3 Install Methods

Clarissa is a command-line AI agent built with Bun and Ink. It provides a conversational interface powered by OpenRouter, enabling access to various LLMs like Claude, GPT-4, Gemini, and more. The agent can execute tools, manage files, run shell commands, and integrate with external services via the Model Context Protocol (MCP).

MCP Tool Explorer

Example request to the read_file tool ("Read the contents of a file"):

# Request
{
  "tool": "read_file",
  "path": "./src/agent.ts"
}

The interactive explorer is a simulated demo; the actual MCP server processes requests from AI assistants like Claude.
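
For comparison, this is roughly what issuing that request looks like from a client built on the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The server command and arguments are an assumed example, and Clarissa's own wiring may differ.

// Hedged sketch using the official MCP TypeScript SDK; not Clarissa's code.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumed example server command; any MCP server works here.
const transport = new StdioClientTransport({
  command: "bunx",
  args: ["@modelcontextprotocol/server-filesystem", "."],
});

const client = new Client({ name: "clarissa", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Discover the tools the server exposes, then call one.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "read_file",
  arguments: { path: "./src/agent.ts" },
});
console.log(result.content);

Because tools are discovered at runtime via listTools, new capabilities can be added by pointing the agent at another MCP server rather than changing its code.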

Key Features

  • ReAct Agent Pattern: Implements the Reasoning + Acting loop for intelligent task execution
  • Multi-model Support: Switch between Claude, GPT-4, Gemini, Llama, and more via OpenRouter
  • Tool Execution: Built-in tools for files, Git, shell commands, and web fetching
  • MCP Integration: Extend with external tools through the Model Context Protocol
  • Session Persistence: Save and restore conversation history across sessions
  • Memory System: Remember facts across sessions with /remember and /memories
  • Context Management: Automatic token tracking and intelligent context truncation (see the sketch after this list)
  • Tool Confirmation: Approve or reject potentially dangerous operations
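
A minimal sketch of the context-management feature referenced above: approximate token counting plus oldest-first truncation. The 4-characters-per-token heuristic and the function names are assumptions; Clarissa's accounting may be more precise. It reuses the illustrative ChatMessage type from the architecture sketch.

// Hedged sketch of token tracking and truncation; not Clarissa's actual code.

// Rough heuristic: ~4 characters per token (assumption; real counts are model-dependent).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function truncateContext(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  // Assumes the first message is the system prompt and always keeps it,
  // then walks the remaining turns newest-first until the budget runs out.
  const [system, ...rest] = messages;
  let budget = maxTokens - estimateTokens(system?.content ?? "");
  const kept: ChatMessage[] = [];
  for (const msg of [...rest].reverse()) {
    const cost = estimateTokens(msg.content);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(msg); // restore chronological order
  }
  return system ? [system, ...kept] : kept;
}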

The ReAct Loop

The agent implements an iterative loop where it:

  1. Receives user input and sends it to the LLM with available tool definitions
  2. If the LLM responds with tool calls, executes them and feeds results back
  3. Repeats until the LLM provides a final answer without requesting tools

This pattern enables complex multi-step tasks while maintaining safety through tool confirmation.
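
A condensed sketch of that loop, reusing the illustrative LLMClient, ToolRegistry, and ChatMessage shapes from the architecture sketch above. It approximates the pattern rather than reproducing Clarissa's code; streaming and tool confirmation are omitted for brevity.

// Hedged sketch of the ReAct loop; not Clarissa's actual implementation.
const MAX_ITERATIONS = 10; // assumption: guard against runaway loops

async function runAgent(
  userInput: string,
  llm: LLMClient,
  tools: ToolRegistry,
  history: ChatMessage[],
): Promise<string> {
  history.push({ role: "user", content: userInput });

  for (let i = 0; i < MAX_ITERATIONS; i++) {
    // Step 1: send the conversation plus tool definitions to the LLM.
    const { text, toolCalls } = await llm.complete(history, tools.list());

    // Step 3 (exit): no tool calls requested, so the text is the final answer.
    if (toolCalls.length === 0) {
      history.push({ role: "assistant", content: text });
      return text;
    }

    // Step 2: execute each requested tool and feed the result back.
    // (The assistant's tool-call message itself is omitted here for brevity.)
    for (const call of toolCalls) {
      const tool = tools.get(call.name);
      const result = tool
        ? await tool.execute(call.arguments)
        : `Unknown tool: ${call.name}`;
      history.push({ role: "tool", content: result, toolCallId: call.id });
    }
  }
  return "Stopped after reaching the iteration limit.";
}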

Usage Modes

# Interactive mode
clarissa

# One-shot mode
clarissa "What files are in this directory?"

# Piped input
git diff | clarissa "Write a commit message for these changes"
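
Piped input can be detected by checking whether stdin is a TTY; the sketch below shows one way to branch between the three modes under Bun. The exact flow is an assumption about how the entry point behaves, not Clarissa's code.

// Hedged sketch of mode selection; not Clarissa's actual entry point.
// When stdin is not a TTY, the process is receiving piped data
// (e.g. `git diff | clarissa "Write a commit message for these changes"`).
const prompt = process.argv.slice(2).join(" ");
const piped = process.stdin.isTTY ? "" : (await Bun.stdin.text()).trim();

if (piped) {
  // One-shot mode with piped content prepended as context for the prompt.
  console.log(`Prompt: ${prompt}\n--- piped content (${piped.length} chars) ---\n${piped}`);
} else if (prompt) {
  console.log(`One-shot prompt: ${prompt}`);
} else {
  console.log("No prompt or piped input: would launch interactive mode (Ink UI).");
}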
