fsPulse

Introduction

Read-Only Guarantee. fsPulse never modifies your files. It requires only read access to the directories you configure for scanning. Write access is required only for fsPulse’s own database, configuration files, and logs — never for your data.

Local-Only Guarantee. fsPulse makes no outbound network requests. All functionality runs entirely on your local system, with no external dependencies or telemetry.

What is fsPulse?

fsPulse is a comprehensive filesystem monitoring and integrity tool that gives you complete visibility into your critical directories. Track your data as it grows and changes over time, detect unexpected modifications, and catch silent threats like bit rot and corruption before they become disasters. fsPulse provides continuous awareness through automated scanning, historical trend analysis, and a dedicated integrity view for reviewing issues.

Your filesystem is constantly evolving. Files are added, modified, and deleted; storage grows. Beneath the surface, though, invisible problems hide: bit rot silently corrupts data, ransomware alters files while preserving timestamps, and directories quietly bloat.

fsPulse gives you continuous awareness of both the visible and invisible:

Monitor Change & Growth:

  • Track directory sizes and growth trends over time
  • Visualize file additions, modifications, and deletions
  • Understand what’s changing and when across all scans

Detect Integrity Issues:

  • Content Hashing (SHA2): Catches when file contents change even though metadata stays the same—the signature of bit rot or tampering
  • Format Validation: Reads and validates file structures to detect corruption in FLAC, JPEG, PNG, PDF, and more

Whether you’re managing storage capacity, tracking project evolution, or ensuring data integrity, fsPulse provides the visibility and peace of mind that comes from truly knowing the state of your data.

Key Capabilities

  • Health-at-a-Glance Overview — See the status of all monitored directories immediately: integrity issues, last scan times, and overall health
  • Continuous Monitoring — Schedule recurring scans (daily, weekly, monthly, or custom intervals) to track your filesystem automatically
  • Temporal Versioning — Every item’s state is tracked over time; browse your filesystem as it appeared at any past scan
  • Size & Growth Tracking — Monitor directory sizes and visualize storage trends over time with dual-format units
  • Change Detection — Track all file additions, modifications, and deletions through version history
  • Integrity Verification — SHA2 hashing detects bit rot and tampering; format validators catch corruption in supported file types
  • Integrity Management — Dedicated Integrity page surfaces suspect hashes and validation failures with review tracking
  • Historical Trends — Interactive trend charts show how your data evolves: sizes, counts, changes, and integrity metrics
  • Powerful Query Language — SQL-inspired syntax for filtering, sorting, and analyzing across four data domains
  • Web-First Design — Elegant web UI for all operations including scanning, browsing, querying, and configuration

Running fsPulse

fsPulse is a web-first application. Start the server and access all functionality through your browser:

fspulse

Then open http://localhost:8080 in your browser to access the web interface.

The web UI provides complete functionality for managing roots, scheduling and monitoring scans, browsing your filesystem data, running queries, and reviewing integrity issues. Configuration is done through environment variables or a config file—see Configuration for details.

fsPulse is designed to scale across large file systems while maintaining clarity and control for the user.


This book provides comprehensive documentation for all aspects of fsPulse. Start with Getting Started for installation, or jump to any section that interests you.

Getting Started

fsPulse can be installed in one of four ways:

  1. Run with Docker (Recommended)
  2. Install via crates.io
  3. Clone and build from source
  4. Download a pre-built release binary from GitHub

Choose the method that works best for your platform and preferences.


1. Run with Docker (Recommended)

The easiest way to run fsPulse is with Docker:

docker pull gtunesdev/fspulse:latest

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Access the web UI at http://localhost:8080

The web UI provides full functionality: managing roots, initiating scans, querying data, and viewing results—all from your browser.

See the Docker Deployment chapter for complete documentation including:

  • Volume management for scanning host directories
  • Configuration options
  • Docker Compose examples
  • NAS deployment (TrueNAS, Unraid)
  • Troubleshooting

2. Install via crates.io

If you have the Rust toolchain installed, you can install fsPulse directly from crates.io:

cargo install fspulse

This will download, compile, and install the latest version of fsPulse into Cargo’s bin directory, typically ~/.cargo/bin. That directory is usually already in your PATH. If it’s not, you may need to add it manually.
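If the fspulse command isn't found after installation, a minimal sketch of adding Cargo's bin directory to your PATH (the path shown is the default rustup location; adjust if yours differs):

```shell
# Make Cargo-installed binaries visible to your shell. Append this line to
# your shell profile (~/.bashrc, ~/.zshrc, ...) to persist it across sessions.
export PATH="$HOME/.cargo/bin:$PATH"
```

After reloading your shell, fspulse --help should resolve.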

Then run:

fspulse --help

To upgrade to the latest version later:

cargo install fspulse --force

3. Clone and Build from Source

If you prefer working directly with the source code (for example, to contribute or try out development versions):

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
cargo build --release

Then run it from the release build directory:

./target/release/fspulse --help

4. Download Pre-Built Release Binaries

Pre-built release binaries for Linux, macOS, and Windows are available on the GitHub Releases page:

  1. Visit the releases page.
  2. Download the appropriate archive for your operating system.
  3. Unpack the archive.
  4. Optionally move the fspulse binary to a directory included in your PATH.

For example, on Unix systems:

mv fspulse /usr/local/bin/

Then confirm it’s working:

fspulse --help

Running fsPulse

After installation, start the fsPulse server:

fspulse

Or explicitly:

fspulse serve

Then open your browser to http://localhost:8080 to access the web interface.

fsPulse is a web-first application. All functionality is available through the web UI:

  • Health overview with root status and integrity summaries
  • Root and schedule management
  • Interactive browsing with tree, folder, and search views
  • Point-in-time filesystem snapshots and comparison
  • Integrity page for reviewing suspect hashes and validation failures
  • Powerful query interface across four data domains
  • Configuration management

Configuration

fsPulse is configured through environment variables or a config file, not command-line flags:

# Example: Change port and enable debug logging
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
fspulse

See Configuration for all available settings and the Command-Line Interface page for more details.

Installation

fsPulse can be installed in several ways depending on your preferences and environment.

Docker (Recommended)

Pull the official image and run:

docker pull gtunesdev/fspulse:latest
docker run -d --name fspulse -p 8080:8080 -v fspulse-data:/data gtunesdev/fspulse:latest

Multi-architecture support: linux/amd64, linux/arm64

See Docker Deployment for complete instructions.

Cargo (crates.io)

Install via Rust’s package manager:

cargo install fspulse

Requires the Rust toolchain to be installed on your system.

Pre-built Binaries

Download platform-specific binaries from GitHub Releases.

Available for: Linux, macOS, Windows

macOS builds include both Intel (x86_64) and Apple Silicon (ARM64) binaries.

Note: All web UI assets are embedded in the binary—no external files or dependencies required.

Building from Source

This guide covers building fsPulse from source code, which is useful for development, customization, or running on platforms without pre-built binaries.

Prerequisites

Before building fsPulse, ensure you have the following installed:

Required Tools

  1. Rust (latest stable version)

    • Install via rustup
    • Verify: cargo --version
  2. Node.js (v18 or later) with npm

    • Install from nodejs.org
    • Verify: node --version and npm --version

Platform-Specific Requirements

Windows:

  • Visual Studio Build Tools or Visual Studio with C++ development tools
  • Required for SQLite compilation

Linux:

  • Build essentials: build-essential (Ubuntu/Debian) or base-devel (Arch)
  • May need pkg-config and libsqlite3-dev depending on distribution

macOS:

  • Xcode Command Line Tools: xcode-select --install

The easiest way to build fsPulse is using the provided build script:

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
./scripts/build.sh

The script will:

  1. Check for required tools
  2. Install frontend dependencies
  3. Build the React frontend
  4. Compile the Rust binary with embedded assets

The resulting binary will be at: ./target/release/fspulse

Manual Build

If you prefer to run each step manually or need more control:

Step 1: Clone the Repository

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse

Step 2: Build the Frontend

cd frontend
npm install
npm run build
cd ..

This creates the frontend/dist/ directory containing the compiled React application.

Step 3: Build the Rust Binary

cargo build --release

The binary will be at: ./target/release/fspulse

Important: The frontend must be built before compiling the Rust binary. The web UI assets are embedded into the binary at compile time via RustEmbed. If frontend/dist/ doesn’t exist, the build will fail with a helpful error message.

Development Builds

For development, you can skip the release optimization:

# Frontend (development mode with hot reload)
cd frontend
npm run dev

# Backend (in a separate terminal)
cargo run -- serve

In development mode, the backend serves frontend files directly from frontend/dist/ rather than using embedded assets, allowing for faster iteration.

Troubleshooting

“Frontend assets not found”

Error:

❌ ERROR: Frontend assets not found at 'frontend/dist/'

Solution: Build the frontend first:

cd frontend
npm install
npm run build
cd ..

Windows: “link.exe not found”

Error: Missing Visual Studio Build Tools

Solution: Install Visual Studio Build Tools with C++ development tools from visualstudio.microsoft.com

Linux: “cannot find -lsqlite3”

Error: Missing SQLite development libraries

Solution: Install platform-specific package:

  • Ubuntu/Debian: sudo apt-get install libsqlite3-dev
  • Fedora: sudo dnf install sqlite-devel
  • Arch: sudo pacman -S sqlite

npm install fails

Error: Network or permission issues with npm

Solution:

  • Clear npm cache: npm cache clean --force
  • Check Node.js version: node --version (should be v18+)
  • Try with sudo (not recommended) or fix npm permissions

Running Your Build

After building, run fsPulse:

./target/release/fspulse --help
./target/release/fspulse serve

Access the web UI at: http://localhost:8080

Docker Deployment

The easiest way to run fsPulse is with Docker. The container runs fsPulse as a background service with the web UI accessible on port 8080. You can manage roots, initiate scans, query data, and view results—all from your browser.


Quick Start

Get fsPulse running in three simple steps:

# 1. Pull the image
docker pull gtunesdev/fspulse:latest

# 2. Run the container
docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

# 3. Access the web UI
open http://localhost:8080

That’s it! The web UI is now running.

This basic setup stores all fsPulse data (database, config, logs) in a Docker volume and uses default settings. If you need to customize settings (like running as a specific user for NAS deployments, or changing the port), see the Configuration and NAS Deployments sections below.


Scanning Your Files

To scan directories on your host machine, you need to mount them into the container. fsPulse can then scan these mounted paths.

Mounting Directories

Add -v flags to mount host directories into the container. We recommend mounting them under /roots for clarity:

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  -v ~/Documents:/roots/documents:ro \
  -v ~/Photos:/roots/photos:ro \
  gtunesdev/fspulse:latest

The :ro (read-only) flag is recommended for safety—fsPulse only reads files during scans and never modifies them.

Creating Roots in the Web UI

After mounting directories:

  1. Open http://localhost:8080 in your browser
  2. Navigate to Roots in the sidebar
  3. Click Add Root
  4. Enter the container path: /roots/documents (not the host path ~/Documents)
  5. Click Create Root

Important: Always use the container path (e.g., /roots/documents), not the host path. The container doesn’t know about host paths.

Once roots are created, you can scan them from the web UI and monitor progress in real-time.


Docker Compose

For persistent deployments, Docker Compose is cleaner and easier to manage:

version: '3.8'

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    container_name: fspulse
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      # Persistent data storage - REQUIRED
      # Must map /data to either a Docker volume (shown here) or a host path
      # Must support read/write access for database, config, and logs
      - fspulse-data:/data

      # Alternative: use a host path instead
      # - /path/on/host/fspulse-data:/data

      # Directories to scan (read-only recommended for safety)
      - ~/Documents:/roots/documents:ro
      - ~/Photos:/roots/photos:ro
    environment:
      # Optional: override any configuration setting
      # See Configuration section below and https://gtunes-dev.github.io/fspulse/configuration.html
      - TZ=America/New_York

volumes:
  fspulse-data:

Save as docker-compose.yml and run:

docker-compose up -d

Configuration

fsPulse creates a default config.toml on first run with sensible defaults. Most users won’t need to change anything, but when you do, there are three ways to customize settings.

Option 1: Use Environment Variables (Easiest)

Override any setting using environment variables. This works with both docker run and Docker Compose.

Docker Compose example:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    environment:
      - FSPULSE_SERVER_PORT=9090      # Change web UI port
      - FSPULSE_LOGGING_FSPULSE=debug # Enable debug logging
      - FSPULSE_ANALYSIS_THREADS=16   # Use 16 analysis threads
    ports:
      - "9090:9090"

Command line example (equivalent to above):

docker run -d \
  --name fspulse \
  -p 9090:9090 \
  -e FSPULSE_SERVER_PORT=9090 \
  -e FSPULSE_LOGGING_FSPULSE=debug \
  -e FSPULSE_ANALYSIS_THREADS=16 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Environment variables follow the pattern FSPULSE_<SECTION>_<FIELD> and override any settings in config.toml. See the Configuration chapter for a complete list of available variables and their purposes.
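As an illustration of the mapping, the three variables above correspond to config.toml entries along these lines (section and key names are inferred from the FSPULSE_<SECTION>_<FIELD> pattern, not copied from a shipped config, so treat them as assumptions):

```toml
# Assumed config.toml equivalents of FSPULSE_SERVER_PORT,
# FSPULSE_LOGGING_FSPULSE, and FSPULSE_ANALYSIS_THREADS.
[server]
port = 9090

[logging]
fspulse = "debug"

[analysis]
threads = 16
```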

Option 2: Edit the Config File

If you prefer editing the config file directly:

  1. Extract the auto-generated config:

    docker exec fspulse cat /data/config.toml > config.toml
    
  2. Edit config.toml with your preferred settings

  3. Copy back and restart:

    docker cp config.toml fspulse:/data/config.toml
    docker restart fspulse
    

Option 3: Pre-Mount Your Own Config (Advanced)

If you want custom settings before first launch, create your own config.toml and mount it:

volumes:
  - fspulse-data:/data
  - ./my-config.toml:/data/config.toml:ro

Most users should start with Option 1 (environment variables) or Option 2 (edit after first run).


NAS Deployments (TrueNAS, Unraid)

NAS systems often have specific user IDs for file ownership. By default, fsPulse runs as user 1000, but you may need it to match your file ownership.

Setting User and Group IDs

Use PUID and PGID environment variables to run fsPulse as a specific user:

TrueNAS Example (apps user = UID 34):

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -e PUID=34 \
  -e PGID=34 \
  -e TZ=America/New_York \
  -v /mnt/pool/fspulse/data:/data \
  -v /mnt/pool/documents:/roots/docs:ro \
  gtunesdev/fspulse:latest

Unraid Example (custom UID 1001):

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -e PUID=1001 \
  -e PGID=100 \
  -v /mnt/user/appdata/fspulse:/data \
  -v /mnt/user/photos:/roots/photos:ro \
  gtunesdev/fspulse:latest

Why PUID/PGID Matters

Even though you mount directories as read-only (:ro), Linux permissions still apply. If your files are owned by UID 34 and aren’t world-readable, fsPulse (running as UID 1000 by default) won’t be able to scan them. Setting PUID=34 makes fsPulse run as the same user that owns the files.
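One way to discover the right values is to read the owner of the data directory directly. A sketch using GNU stat (on macOS/BSD, use stat -f '%u' / '%g' instead); DATA_DIR is a placeholder for your real data path:

```shell
# Derive PUID/PGID from whoever owns the directory you plan to scan.
# DATA_DIR defaults to the current directory here; point it at your real
# data path, e.g. /mnt/pool/documents.
DATA_DIR="${DATA_DIR:-.}"
PUID=$(stat -c '%u' "$DATA_DIR")
PGID=$(stat -c '%g' "$DATA_DIR")
echo "PUID=$PUID PGID=$PGID"
```

Pass the printed values to docker run via -e PUID=... -e PGID=... as in the examples above.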

When to use PUID/PGID:

  • Files have restrictive permissions (not world-readable)
  • Using NAS with specific user accounts (TrueNAS, Unraid, Synology)
  • You need the /data directory to match specific host ownership

Advanced Topics

Custom Network Settings

If you’re using macvlan or host networking, ensure the server binds to all interfaces:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    network_mode: host
    environment:
      - FSPULSE_SERVER_HOST=0.0.0.0  # Required for non-bridge networking
      - FSPULSE_SERVER_PORT=8080

Note: The Docker image sets FSPULSE_SERVER_HOST=0.0.0.0 by default, so this is only needed if your config.toml overrides it to 127.0.0.1.

Reverse Proxy Setup

For public access with authentication, use a reverse proxy like nginx:

server {
    listen 80;
    server_name fspulse.example.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support for scan progress
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Using Bind Mounts Instead of Volumes

By default, we use Docker volumes (-v fspulse-data:/data) which Docker manages automatically. For NAS deployments, you might prefer bind mounts to integrate with your existing backup schemes:

# Create directory on host
mkdir -p /mnt/pool/fspulse/data

# Use bind mount (map /data to the host path; note that an inline comment
# after a line-continuation backslash would break the command)
docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v /mnt/pool/fspulse/data:/data \
  gtunesdev/fspulse:latest

Benefits of bind mounts for NAS:

  • Included in your NAS snapshot schedule
  • Backed up with your existing backup system
  • Directly accessible for manual inspection

Trade-off: You need to manage permissions yourself (use PUID/PGID if needed).


Troubleshooting

Cannot Access Web UI

Problem: http://localhost:8080 doesn’t respond

Solutions:

  1. Check the container is running:

    docker ps | grep fspulse
    
  2. Check logs for errors:

    docker logs fspulse
    

    Look for the “Server started” message.

  3. Verify port mapping:

    docker port fspulse
    

    Should show 8080/tcp -> 0.0.0.0:8080

Permission Denied Errors

Problem: “Permission denied” when scanning or accessing /data

Solutions:

  1. Check file ownership:

    ls -ln /path/to/your/files
    
  2. Set PUID/PGID to match file owner:

    docker run -e PUID=1000 -e PGID=1000 ...
    
  3. For bind mounts, ensure host directory is writable:

    chown -R 1000:1000 /mnt/pool/fspulse/data
    

Configuration Changes Don’t Persist

Problem: Settings revert after container restart

Solution: Verify /data volume is mounted:

docker inspect fspulse | grep -A 10 Mounts

If missing, recreate container with volume:

docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ...

Database Locked Errors

Problem: “Database is locked” errors

Cause: Multiple containers accessing the same database

Solution: Only run one fsPulse container per database. Don’t mount the same /data volume to multiple containers.


Data Backup

Backing Up Your Data

For Docker volumes:

# Stop container
docker stop fspulse

# Backup volume
docker run --rm \
  -v fspulse-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/fspulse-backup.tar.gz /data

# Restart container
docker start fspulse

For bind mounts:

# Simply backup the host directory
tar czf fspulse-backup.tar.gz /mnt/pool/fspulse/data

Restoring from Backup

For Docker volumes:

# Create volume
docker volume create fspulse-data-restored

# Restore data
docker run --rm \
  -v fspulse-data-restored:/data \
  -v $(pwd):/backup \
  alpine sh -c "cd / && tar xzf /backup/fspulse-backup.tar.gz"

# Use restored volume
docker run -d --name fspulse -v fspulse-data-restored:/data ...

For bind mounts:

tar xzf fspulse-backup.tar.gz -C /mnt/pool/fspulse/data

Image Tags and Updates

fsPulse provides multiple tags for different update strategies:

Tag      Description                      When to Use
latest   Latest stable release            Production (pinned versions)
1.2.3    Specific version                 Production (exact control)
1.2      Latest patch of minor version    Production (auto-patch updates)
main     Development builds               Testing new features

Recommendation: Use specific version tags (1.2.3) or minor version tags (1.2) for production. Avoid latest in production to prevent unexpected updates.
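For Docker Compose deployments, pinning is just a matter of the image tag. A minimal fragment (the version number is illustrative, matching the example tags above):

```yaml
services:
  fspulse:
    image: gtunesdev/fspulse:1.2.3   # pin an exact version; bump deliberately
```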

Updating to a new version:

docker pull gtunesdev/fspulse:1.2.3
docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ... gtunesdev/fspulse:1.2.3

Your data persists in the volume across updates.


Platform Support

fsPulse images support multiple architectures—Docker automatically pulls the correct one for your platform:

  • linux/amd64 - Intel/AMD processors (most common)
  • linux/arm64 - ARM processors (Apple Silicon, Raspberry Pi 4, ARM servers)


First Steps

This guide walks you through your first scan and basic usage of fsPulse.

Starting the Web Interface

Launch fsPulse:

fspulse

Open your browser to http://127.0.0.1:8080

You’ll see the Home page. On first launch with no roots configured, it will display a welcome message prompting you to add your first root.

Adding Your First Scan Root

  1. Navigate to the Roots page (in the utility section of the sidebar)
  2. Click Add Root
  3. Enter the path to the directory you want to monitor
  4. Save

Running Your First Scan

  1. From the Roots page, click Scan Now for your newly added root
  2. Watch the real-time progress on the Home page — you’ll see live statistics as fsPulse scans, sweeps for deleted items, and analyzes files
  3. Once complete, explore the results

Exploring Your Data

After your first scan completes:

  • Home — See the health status of your root, including any new integrity issues
  • Browse — Navigate your filesystem hierarchy with tree, folder, or search views. Open the detail panel to inspect any file’s metadata, size history, version history, and integrity state.
  • Integrity — Review any integrity issues detected (suspect hash changes, validation failures)
  • Trends — View charts and trends (requires multiple scans to generate meaningful data)
  • Data Explorer — Run queries against your scan data using the visual builder or free-form query language

Setting Up Scheduled Scans

  1. Navigate to the Schedules page
  2. Click Add Schedule
  3. Select your root and configure:
    • Schedule type (daily, weekly, monthly, interval)
    • Time/day settings
    • Scan options (hashing, validation)
  4. Save the schedule

Scheduled scans will run automatically based on your configuration. You can see upcoming tasks on the Home page and the full activity log on the History page.

MCP Server (Experimental)

fsPulse includes a built-in Model Context Protocol (MCP) server that allows AI agents to query scan data, analyze integrity issues, and explore filesystem history.

Experimental: MCP support in fsPulse is experimental. You may experience connectivity issues depending on your client and connection method. See Setup for details on known limitations.

The MCP endpoint is served at /mcp on the same port as the web UI. It uses the Streamable HTTP transport, compatible with Claude Desktop, Claude Code, and other MCP clients.

What Can an Agent Do?

The agent has access to fsPulse’s full data model — roots, scans, items, versions, and hashes — through 10 tools. It can:

  • Explore — browse directory trees and search for files at any point in time
  • Query — run structured queries with filtering, aggregation, and ordering across all domains
  • Analyze — investigate integrity issues, track storage growth, and identify high-churn files
  • Report — summarize activity over time periods, compare scans, and generate trend data

The most effective use is iterative: start with a broad question (“what changed this week?”), then drill into specific folders, files, or time ranges based on what the agent finds. See Sample Prompts for examples.

Pagination

All tools return at most 200 rows per call. The agent handles pagination automatically — if results are truncated, it can request the next page. You don’t need to think about pagination in your prompts; the agent manages it as needed.

Contents

  • Setup — Enable MCP and configure your client
  • Sample Prompts — Example prompts and multi-step investigation workflows
  • Tools — The 10 tools available to AI agents

Setup (Experimental)

Experimental: MCP support in fsPulse is experimental. The connection methods described below each have known limitations that are under active investigation. You may need to restart fsPulse or your client if connectivity issues occur.

Enable the MCP Server

The MCP server is disabled by default. Enable it in the fsPulse Settings page under MCP Server (Experimental), or add the following to your config.toml:

[mcp]
enabled = true

Restart fsPulse after changing this setting. You should see MCP server enabled at /mcp in the startup output.
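If you prefer environment-variable configuration, the FSPULSE_<SECTION>_<FIELD> pattern from the Configuration chapter suggests the following equivalent (the variable name is derived from the pattern, not confirmed by the source, so treat it as an assumption):

```shell
# Assumed equivalent of setting enabled = true under [mcp] in config.toml.
# Export before starting fspulse.
export FSPULSE_MCP_ENABLED=true
```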

Choosing a Connection Method

There are two ways to connect an AI client to fsPulse’s MCP server:

Method           Client           Setup Effort
Claude Desktop   Claude Desktop   Low
Claude Code      Claude Code      Low

Claude Desktop

Claude Desktop connects to fsPulse using the Developer settings JSON config with mcp-remote as a stdio-to-HTTP bridge. This requires Node.js (for npx).

Prerequisites

  • Node.js must be installed (provides npx)
  • On macOS, Node.js must have Local Network access (check System Settings > Privacy & Security > Local Network if connecting to an fsPulse instance on another machine on your network)

Configuration

Open Claude Desktop’s configuration file by going to Settings > Developer (under “Desktop app”) and clicking Edit Config. Add an entry under mcpServers:

{
  "mcpServers": {
    "fspulse": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:8080/mcp",
        "--allow-http"
      ]
    }
  }
}

Replace localhost:8080 with the hostname and port of your fsPulse instance if it is running on a different machine.

Note: The --allow-http flag is required by mcp-remote when connecting over HTTP. If your fsPulse instance is served over HTTPS, you can omit this flag.

Restart Claude Desktop. fsPulse should appear as an available MCP server.

Claude Code

Claude Code supports Streamable HTTP natively, with no bridge required. Add to your .mcp.json:

{
  "mcpServers": {
    "fspulse": {
      "type": "streamable-http",
      "url": "http://localhost:8080/mcp"
    }
  }
}

Multiple Instances

You can connect to multiple fsPulse instances by giving each a unique name:

Claude Desktop

{
  "mcpServers": {
    "fspulse-local": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8080/mcp", "--allow-http"]
    },
    "fspulse-remote": {
      "command": "npx",
      "args": ["mcp-remote", "http://my-server:8080/mcp", "--allow-http"]
    }
  }
}

Claude Code

{
  "mcpServers": {
    "fspulse-local": {
      "type": "streamable-http",
      "url": "http://localhost:8080/mcp"
    },
    "fspulse-remote": {
      "type": "streamable-http",
      "url": "http://my-server:8080/mcp"
    }
  }
}

Reference a specific instance by name in your prompts:

Show me the integrity report on fspulse-remote

Sample Prompts

Once connected, try prompts like these. The agent will choose which tools to call based on your request.

Getting Started

Give me an overview of what fsPulse is monitoring

Are there any integrity issues? Show me the details

File Analysis

What are the largest files being tracked?

Break down the files by extension with counts and total sizes

How many files of each type are there? Which types take the most space?

Integrity Investigation

Show me all suspect hash observations

Show me validation failures for PDF files

Scan History

Show me recent scan activity for /home/user/Documents

What changed in the last scan?

Browsing and Searching

List the contents of /Users/greg/Documents

Search for files named “report”

Aggregation

Count files by extension for root 3, sorted by total size

How many scans has each root had?

Multi-Step Investigations

The most powerful use of fsPulse with an AI agent is iterative investigation — start with a high-level question and drill down based on what you find. These are examples of conversations, not single prompts.

Activity Report for a Time Period

Start broad, then focus:

What changed in root 1 between March 1 and March 15?

From the results, you might follow up with:

Drill into the /photos/2026 folder — what was added there?

Which files were modified more than once during that period?

Storage Growth Analysis

Show me how the total size of root 2 has changed over the last 20 scans

Graph the file count and total size trends

Which folders are growing the fastest? Break down size by top-level directory

Investigating Churn

Which files in root 1 have been modified the most times?

Show me the version history for that file — how has its size changed over time?

Graph the size of that file over its version history

Integrity Triage

Show me all unreviewed integrity issues for root 3

Focus on the PDF validation failures — are they concentrated in a specific folder?

Show me the version history for the files with suspect hashes — when did the hash first change?

Tools

The MCP server provides 10 tools. The AI agent selects which tools to call based on your prompt.

Pagination

All tools return at most 200 rows per call. Most tools accept limit (default 50, max 200) and offset (default 0) parameters for pagination. Total counts are included in responses, with next-offset hints when more results are available.

For query_data, pagination is controlled via LIMIT and OFFSET in the query string itself. Use query_count to get total row counts before paginating.
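Under the hood, an MCP client issues standard tools/call requests over the Streamable HTTP transport. A sketch of what a paginated search_files call might look like on the wire (limit/offset come from the tool descriptions above; the other argument names are illustrative assumptions, not taken from the tool schemas):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_files",
    "arguments": { "root_id": 1, "name": "report", "limit": 50, "offset": 50 }
  }
}
```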

system_overview

High-level summary of all monitored roots with latest scan stats (file/folder counts, total monitored size), unreviewed integrity issue counts, and database path/size.

query_data

Execute a query using the fsPulse query DSL. Supports all five domains (items, versions, hashes, scans, roots), filtering, aggregation with GROUP BY, and ordering. Date columns can be displayed as date-only (@short), date+time (@full), or Unix epoch (@timestamp), and all three formats can be used as filter input. Results are capped at 200 rows; use LIMIT and OFFSET in the query string to paginate. Returns results as a formatted table.

query_count

Count rows matching a query without returning the data. Useful for understanding data volumes before paginating with query_data.

query_help

Returns documentation for the query DSL, including available columns and filter syntax for each domain. The agent uses this to learn valid column names before constructing queries.

integrity_report

Report of items with integrity issues (validation failures, suspect hashes) for a specific root. Supports filtering by issue type, review status, file extension, and path. Supports pagination via limit/offset. Returns total count.

scan_history

Scan history for a root showing file counts, sizes, change rates, and integrity findings over time. Supports pagination via limit/offset. Returns total count.

browse_filesystem

Browse the filesystem tree at a specific point in time. Lists immediate children of a directory within a root at a given scan. Supports pagination via limit/offset. Returns total count.

search_files

Search for files and directories by name within a root at a specific point in time. Supports pagination via limit/offset. Returns total count.

item_detail

Detailed information about a specific item including its version history, size changes, and integrity state. Version list supports pagination via limit/offset.

scan_changes

Show what files were added, modified, or deleted in a specific scan. Can filter by change type. Supports pagination via limit/offset. Returns total count.

Scanning

Scans are at the core of how fsPulse tracks changes to the filesystem over time. A scan creates a snapshot of a root directory and detects changes compared to previous scans. This page explains how to initiate scans, how incomplete scans are handled, and the phases involved in each scan.


Initiating a Scan

fsPulse runs as a web service, and scans are initiated through the web UI:

  1. Start the server: fspulse serve
  2. Open http://localhost:8080 in your browser (or the custom port you’ve configured)
  3. Navigate to the Roots page to add directories to monitor
  4. Start a manual scan or let a schedule trigger one
  5. Monitor real-time progress on the Home page

The web UI supports both scheduled automatic scans and manual on-demand scans. You can create recurring schedules on the Schedules page (daily, weekly, monthly, or custom intervals) or initiate individual scans from the Roots page as needed.

Once a scan on a root has begun, it must complete or be explicitly stopped before another scan on the same root can be started. Scans on different roots can run independently.


Hashing

Hashing is a key capability of fsPulse.

fsPulse uses the standard SHA-256 hash algorithm to compute digital fingerprints of file contents. The intent of hashing is to detect changes to file content in cases where the modification date and file size have not changed. One example of where this can occur is bit rot (data decay).
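For illustration, a chunked SHA-256 fingerprint like the one fsPulse stores can be computed with Python's standard library (a sketch, not fsPulse's implementation):

```python
import hashlib
import tempfile

def file_sha256(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 digest, reading in chunks so that large
    files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: hash a small temporary file with known contents
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello fsPulse")
demo_hash = file_sha256(tmp.name)
```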

When configuring a scan in the web UI, you can enable hashing with these options:

  • Hash changed items (default): Compute hashes for items that have never been hashed or whose file size or modification date has changed
  • Hash all items: Hash all files, including those that have been previously hashed

If a hash is detected to have changed without a corresponding metadata change (modification date or size), the file’s hash_state is set to Suspect. These suspect hashes are surfaced on the Integrity page for review. If metadata did change, the hash change is considered legitimate and hash_state remains Baseline.

Hash States

Hash state tracks the integrity of a file’s content hash over time. Hash states are stored in the database as:

  • U: Unknown. No hash has been computed for this file version
  • B: Baseline. The first hash computed for a version, establishing the reference point
  • S: Suspect. Hash changed without a corresponding metadata change

The first hash computed for any item version is always Baseline — it establishes the reference against which future hashes are compared. If a subsequent hash differs without a metadata change, it is marked Suspect. When an item’s metadata changes, a new version is created and the next hash establishes a new Baseline.
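Condensed into a decision function, the rules above read as follows (a sketch of the described logic, not fsPulse's code; 'B' and 'S' are the stored state codes):

```python
def next_hash_state(prev_hash, new_hash, metadata_changed):
    """Return 'B' (Baseline) or 'S' (Suspect) for a newly computed hash."""
    if prev_hash is None or metadata_changed:
        return "B"        # first hash, or a legitimate change: new baseline
    if new_hash != prev_hash:
        return "S"        # content changed with no metadata change: suspect
    return "B"            # hash unchanged: still the baseline
```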

Finding Hash Changes

You can investigate hash changes through the web UI:

  • Integrity Page: Shows all suspect hash changes with review status filtering
  • Browse Page: Click any item to see its version history including hash changes
  • Data Explorer: Use the Query tab to run custom queries

Example query to find items with suspect hashes (run on the Data Explorer’s Query tab):

items where hash_state:(S) show default, hash_state, file_hash order by item_path

Validating

fsPulse can attempt to assess the “validity” of files.

fsPulse uses community-contributed libraries to “validate” files. Validation consists of opening a file and reading or traversing its contents; these libraries report a variety of “errors” when invalid content is encountered.

fsPulse’s ability to validate files is limited to the capabilities of the libraries that it uses, and these libraries vary in terms of completeness and accuracy. In some cases, such as fsPulse’s use of lopdf to validate PDF files, false positive “errors” may be detected as a consequence of lopdf encountering PDF file contents it does not yet understand. Despite these limitations, fsPulse offers a unique and effective view into potential validity issues in files.

See Validators for the complete list of supported file types.

When configuring a scan in the web UI, you can enable validation with these options:

  • Validate changed items (default): Validate files that have never been validated or whose modification date or size has changed
  • Validate all items: Validate all files regardless of previous validation status

Validation States

Validation applies only to files — folders do not have a validation state. Validation states are stored in the database as:

  • U: Unknown. No validation has been performed
  • N: No Validator. No validator exists for this file type
  • V: Valid. Validation was performed and no errors were encountered
  • I: Invalid. Validation was performed and an error was encountered

In the case of ‘I’ (Invalid), the validation error message is stored alongside the validation state. When an item’s validation state changes, a new item version is created capturing both the old and new states.

Validation is a one-time operation per version. Once a version has been validated, it is not re-validated — the result is permanently associated with that version.
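The once-per-version behavior amounts to memoizing the result by version, which can be sketched as (illustrative, not fsPulse's code):

```python
# Sketch of "one validation per version": the validator runs at most once for
# a given version id, and the stored result is reused afterward.

_results = {}

def validate_version(version_id, validator):
    if version_id not in _results:       # first and only validation
        _results[version_id] = validator()
    return _results[version_id]

calls = []
def fake_validator():
    calls.append(1)
    return "V"                           # 'V' = Valid

first = validate_version(42, fake_validator)
second = validate_version(42, fake_validator)
```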

Finding Validation Issues

You can investigate validation failures through the web UI:

  • Integrity Page: Shows all items with validation failures, with filtering and review status tracking
  • Browse Page: Click any item to see its validation status and error details in the inline detail panel
  • Data Explorer: Use the Query tab to run custom queries

Example query to find items currently in an invalid validation state:

items where val_state:(I) show default, val_error order by item_path

Additional queries can filter on specific validation states. See Query Syntax for details.


In-Progress Scans

fsPulse is designed to be resilient to interruptions like system crashes or power loss. If a scan stops before completing, fsPulse saves its state so it can be resumed later.

When you attempt to start a new scan on a root that has an in-progress scan, the web UI will prompt you to:

  • Resume the scan from where it left off
  • Stop the scan and discard its partial results

Stopping a scan reverts the database to its pre-scan state using an undo log. All detected versions, computed hashes, and validations from that partial scan will be discarded.
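A toy sketch of the undo-log idea (conceptual only, not fsPulse's implementation):

```python
# Before each in-place update, the old value is recorded in an undo log so
# that stopping a scan can restore the pre-scan state.

def update_with_undo(table, undo_log, key, new_value):
    undo_log.append((key, table.get(key)))   # remember prior value (None = absent)
    table[key] = new_value

def rollback(table, undo_log):
    while undo_log:
        key, old = undo_log.pop()            # undo newest changes first
        if old is None:
            del table[key]
        else:
            table[key] = old

table, undo = {"a.txt": "v1"}, []
update_with_undo(table, undo, "a.txt", "v2")
update_with_undo(table, undo, "b.txt", "v1")
rollback(table, undo)
```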


Phases of a Scan

Each scan proceeds in three main phases:

1. Scanning

The directory tree is traversed recursively. For each file or folder encountered:

  • If not seen before:
    • A new item identity is created
    • A new item version is inserted with first_scan_id = current_scan
  • If seen before:
    • fsPulse compares current filesystem metadata:
      • Modification date (files and folders)
      • File size (files only)
    • If metadata differs, a new item version is created
    • If unchanged, the existing version’s last_scan_id is updated in place
  • If the path matches a deleted item (previous version has is_deleted = true):
    • A new version is created with is_deleted = false (rehydration)

Files and folders are treated as distinct types. A single path that appears as both a file and folder at different times results in two separate items.
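The per-path decision above can be sketched as follows (illustrative names, not fsPulse's actual schema):

```python
def record_observation(db, path, mod_date, size, scan_id):
    """Sketch of the per-path decision in the scanning phase. db maps each
    path to its current version (illustrative, not fsPulse's storage model)."""
    cur = db.get(path)
    new_version = {"mod_date": mod_date, "size": size, "is_deleted": False,
                   "first_scan_id": scan_id, "last_scan_id": scan_id}
    if cur is None:
        db[path] = new_version           # never seen: new item + first version
        return "added"
    if cur["is_deleted"]:
        db[path] = new_version           # path reappeared: rehydration
        return "rehydrated"
    if (cur["mod_date"], cur["size"]) != (mod_date, size):
        db[path] = new_version           # metadata differs: new version
        return "modified"
    cur["last_scan_id"] = scan_id        # unchanged: update version in place
    return "unchanged"
```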

Writes are batched (100 items per transaction) for performance. An undo log records in-place updates to support rollback if the scan is stopped.


2. Sweeping

fsPulse identifies items not seen during the current scan:

  • Any item whose current version is not deleted and was not visited in this scan gets a new version with is_deleted = true.

Because fsPulse does not track move operations, a moved file appears as a delete at its old path and an add at its new path.
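The sweep rule can be sketched as follows (illustrative names, not fsPulse internals):

```python
def sweep(versions, visited_paths, scan_id):
    """Create a delete-marker version for every live item not seen this scan."""
    tombstones = []
    for path, v in versions.items():
        if not v["is_deleted"] and path not in visited_paths:
            tombstones.append({"path": path, "is_deleted": True,
                               "first_scan_id": scan_id})
    return tombstones

versions = {"/a": {"is_deleted": False},
            "/b": {"is_deleted": False},
            "/c": {"is_deleted": True}}      # already deleted: left alone
marked = sweep(versions, visited_paths={"/a"}, scan_id=5)
```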


3. Analyzing

This phase runs only if hashing and/or validation is enabled (see Hashing and Validating above).

  • Hashing — Computes a SHA-256 hash of file contents
  • Validation — Uses file-type-specific validators to check content integrity (see Validators)

Hash results are stored in a separate hash_versions table, bound to the current item version. Validation results are written directly onto the item version row. See Concepts for details on how hash and validation state are tracked.


Performance and Threading

The analysis phase runs in parallel.


Summary of Phases

  • Scanning: Traverses the filesystem, creates or updates item versions
  • Sweeping: Marks missing items as deleted with new version rows
  • Analyzing: Computes hashes and validates files, updating or creating versions

Each scan provides a consistent view of the filesystem at a moment in time. The temporal versioning model means you can reconstruct the exact state of any item at any scan point.

Interface

The fsPulse interface provides a comprehensive visual environment for monitoring your filesystems, managing scans, analyzing trends, and investigating issues.

Overview

Access the interface by running:

fspulse

By default, the interface is available at http://127.0.0.1:8080. You can customize the host and port through configuration or environment variables (see Configuration).

The left sidebar organizes pages into two groups:

Primary — the pages you’ll use most often:

  • Home — Health overview showing root status, active tasks, and recent activity
  • Browse — Navigate filesystem hierarchy with tree, folder, and search views
  • Integrity — Review suspect hashes and validation failures
  • Trends — Visualize historical data with interactive charts

Utility — operational and investigative pages:

  • History — Scan and task activity log
  • Roots — Add, remove, and scan monitored directories
  • Schedules — Create and manage automated scan schedules
  • Data Explorer — Query interface for advanced data analysis
  • Settings — Edit configuration, view database stats and system info

The sidebar can be collapsed to icon-only mode for more screen space:

  • Click the collapse button in the sidebar footer
  • Use the keyboard shortcut Cmd+B (macOS) or Ctrl+B (Windows/Linux)
  • Click the sidebar’s right edge (rail) to toggle
  • The sidebar automatically collapses on narrower screens and expands on wider ones

When collapsed, hovering over an icon shows a tooltip with the page name.

Shared Root Context

When you select a root on Browse, Integrity, Trends, Schedules, or History, that selection is carried across pages via a URL parameter (?root_id=N). Selecting a root on Browse and then navigating to Integrity automatically pre-selects the same root, so you don’t need to re-select it on every page.
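The mechanism is ordinary query-string handling; for illustration, in Python:

```python
from urllib.parse import urlparse, parse_qs

def shared_root_id(url):
    """Extract the shared root selection (?root_id=N) from a page URL.
    Illustrative only; the web UI handles this internally."""
    params = parse_qs(urlparse(url).query)
    values = params.get("root_id")
    return int(values[0]) if values else None
```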

Live Updates

The web interface uses WebSocket connections to provide real-time updates during task execution. When a scan is running, you can watch progress updates, statistics, and phase transitions as they happen. The sidebar footer also shows a compact progress indicator for the active task.

Home

The Home page is the landing page of fsPulse. It answers the most important question at a glance: “Is my data safe?”


Root Health Summary

The centerpiece of the Home page is the root health summary, which shows the status of every monitored directory:

  • Root path — The directory being monitored
  • Last scan time — When the root was last scanned, with staleness indicators for roots that haven’t been scanned recently
  • Last scan outcome — Whether the most recent scan completed successfully, stopped, or errored

Each root row is clickable — clicking navigates to the Browse page for that root.

When no roots have been configured, the Home page shows a welcome message with a link to the Roots page to add your first root.


Active Task

When a scan or other task is running, the Home page displays a progress card showing:

  • Task type and target root
  • Current phase (Scanning, Sweeping, Analyzing Files, Analyzing Scan)
  • Real-time statistics — files and folders processed, items hashed, items validated
  • Progress indicator

Progress updates are delivered in real time via WebSocket. When no task is running, the card shows an idle state with a button to initiate a scan.


Upcoming Tasks

A table shows tasks queued for execution, typically generated from schedules:

  • Task type and target root
  • Scheduled run time
  • Source schedule

Tasks execute sequentially — only one task runs at a time.


Recent Activity

A compact summary of recent scan and task activity, showing the last several completed operations with their outcomes. Each entry shows change counts (adds, modifications, deletions) and an Integrity column that links to the Integrity page showing any new validation errors or suspect hashes detected during that scan. For the full activity log, click the View All link to navigate to the History page.


Scan Now

Click the scan button on the active task card to initiate a scan. You’ll select:

  • Which root to scan
  • Hashing mode (None, Hash changed items, Hash all items)
  • Validation mode (None, Validate changed items, Validate all items)

Pause / Resume

fsPulse supports pausing all task execution. When paused:

  • A banner appears showing how long tasks have been paused and when the pause will expire (if a duration was set)
  • No new tasks will start until resumed
  • You can edit the pause duration or resume immediately

This is useful when you want to temporarily prevent scans from running (for example, during a backup window or heavy system load).

Browse

The Browse page is an investigation workbench for navigating your filesystem hierarchy, comparing states across time, and inspecting individual items in detail.

Browse Cards

Each browse card is a self-contained browsing environment with its own root selection, scan selection, view mode, and optional detail panel.

Root and Scan Selection

At the top of each card:

  • Root picker: Select which root directory to browse. If you navigated here from another page with a root selected, it will be pre-selected via the shared root context.
  • Scan bar: Shows the current scan (e.g., “Scan #42 — 15 Jan at 10:30”) with a calendar button to pick a different scan and a “Latest” button to jump to the most recent scan.

The calendar picker highlights dates that have scans and disables dates without scans. When you select a date, it shows all scans for that day with their change summaries (adds, modifications, deletions). If you select a date with no scans, the nearest available scan is shown.

View Modes

Three view modes are available via tabs:

Tree View

A hierarchical expand/collapse tree showing the filesystem structure at the selected scan point:

  • Click the chevron on folders to expand or collapse
  • Click an item name to select it and open the detail panel
  • Children are loaded on demand for performance
  • Folder expansion state is preserved when switching between scans or when the tree refreshes
  • Deleted items are shown with strikethrough and a trash icon when “Show deleted” is enabled
  • Items are color-coded by change type: green (added), blue (modified), red (deleted), gray (unchanged)

Folder View

A flat, breadcrumb-navigated view similar to a file explorer:

  • Breadcrumb ribbon at the top shows the current path — click any segment to navigate up
  • Sortable columns: Name, Size, Modified Date — click column headers to sort (ascending/descending)
  • Folder navigation: Click a folder’s icon to navigate into it
  • Item selection: Click an item’s name to select it for the detail panel
  • Directories are always sorted first, then by the selected column
  • When switching from Folder view to Tree view, the tree automatically reveals and expands to your current folder location

Search View

A text search across all items in the selected root and scan:

  • Type in the search box to filter items by path (with debounce)
  • Results appear as a flat list with parent path context shown below each item name
  • Click any result to select it for the detail panel
  • Typing in the search box automatically switches to the Search tab

Integrity Filters

A collapsible filter panel lets you narrow the view by integrity status across three dimensions:

Change Kind: Filter by Added, Modified, Deleted, or Unchanged items.

Hash State: Filter by Baseline, Unknown, or Suspect hash states. Folders are shown if they contain descendants matching the selected states.

Validation State: Filter by Valid, Invalid, Unknown, or No Validator states. Works the same as hash state filtering with descendant logic for folders.

Items must pass all active filter dimensions to be visible. Active filters are indicated with visual highlights on the filter buttons.

Controls

  • Show deleted: Toggle to include or hide deleted items across all views
  • Each view mode maintains its own independently selected item — switching tabs preserves your selection in each

Detail Panel

Clicking any item opens an inline detail panel within the card. The panel can be positioned on either side of the card using the flip button.

Item Information

  • File/folder type label and icon
  • File size (formatted compactly)
  • Modification time
  • Path with deletion status indicator
  • First seen scan and total version count

File Integrity (Files Only)

  • Hash state: Displayed as Baseline, Suspect, or Unknown with a colored icon. Click to expand/collapse the full SHA-256 hash value. If suspect, a review toggle is shown.
  • Validation state: Displayed as Valid, Invalid, Unknown, or No Validator. If invalid, the validation error message is shown with a review toggle.
  • Validate toggle: Enable or disable future validation for this item.

Directory Children Counts (Directories Only)

  • Immediate file and folder counts
  • Change breakdown: added (green), modified (blue), deleted (red), unchanged (gray)
  • Integrity breakdown: suspect hash count, invalid item count

Size History

  • Interactive line chart showing how the item’s size has changed over time
  • Configurable time window: 7 days, 30 days, 3 months, 6 months, 1 year

Version History

  • Chronological list of all item versions, each showing its change type (Initial, Modified, Deleted, Restored)
  • Each version displays its temporal range (first scan to last scan)
  • Expand a version to see exactly what changed: modification date, size, hash, access state, validation state, hash state
  • For directories: shows changes in descendant counts and integrity counts
  • The version corresponding to the currently viewed scan is highlighted with an eye icon
  • Load older versions on demand

Close the detail panel by clicking the close button.


Comparison Mode

Click Show Compare in the page header to open a second browse card side by side. Each card operates independently — you can:

  • Compare across time: Same root, different scans — see how the filesystem changed
  • Compare across roots: Different roots (e.g., original vs. backup) at the same or different scans

When both cards have detail panels open, the panels appear adjacent in the center for easy comparison (Card A’s panel is on the right, Card B’s panel is on the left).

Close the second card by clicking Show Compare again.


Use Cases

  • Investigation: Drill into specific files from the Integrity page, then inspect version history and validation errors
  • Capacity analysis: Use Folder view sorted by Size to find what’s consuming space
  • Change review: Browse at two scan points in comparison mode to see exactly what changed
  • Integrity audit: Use hash state and validation state filters to focus on files with suspect or invalid states
  • Verification: Check hash and validation status of critical files in the detail panel
  • Point-in-time browsing: Use the scan picker to see the filesystem as it was at any past scan

Integrity

The Integrity page provides a centralized view for reviewing and managing integrity issues detected during scans. It surfaces two kinds of issues: suspect hashes (file content changed without a metadata change) and validation errors (format validation detected corruption).

Issue Types

Suspicious Hashes

Detected when:

  • A file’s hash changes between scans
  • The file’s modification time and size have NOT changed

This pattern indicates potential:

  • Bit rot (silent data corruption)
  • Tampering or malicious modification
  • Filesystem anomalies

Validation Errors

Detected when format validation fails:

  • FLAC audio files with invalid structure
  • JPEG/PNG images that fail format checks
  • PDF files with corruption
  • Other validated file types with detected issues

See Validators for details on supported file types.

Review Status

Each integrity issue can be in one of two states:

  • Unreviewed: The issue has not been acknowledged by the user
  • Reviewed: The user has acknowledged the issue

Marking an issue as reviewed records a timestamp. Review status is tracked independently for hash issues and validation issues on each item version.

Filtering

Filter integrity issues by:

  • Issue type — Suspicious hashes, Validation errors, or All
  • File type — All file types, Image files, PDF files, Audio files
  • Review status — Not Reviewed, Reviewed, or All
  • Root — Show issues for a specific monitored directory
  • Path search — Filter by item path
  • Show deleted — Include or exclude items that are currently deleted

Tip: If you select a root on the Browse or Trends page before navigating to Integrity, the same root will be pre-selected automatically via the shared root context.

Issue Table

The main table shows one row per item that has integrity issues. Each row displays:

  • Validate toggle — Enable or disable future validation for this item
  • File name — With parent folder context
  • Hashes — Count of unreviewed and reviewed hash issues
  • Validation — Count of unreviewed and reviewed validation issues
  • Review All — Button to mark all issues on this item as reviewed

Expanding Items

Click the expand toggle on any row to see the version history for that item, showing detailed hash and validation state for each version. From the expanded view you can review individual issues at the version level.

Reviewing Issues

Reviews are a lightweight acknowledgment mechanism — they indicate that you have seen and considered an integrity issue.

  • Review individual issues: Expand an item and use the review toggle on a specific version’s hash or validation issue
  • Review all issues on an item: Click the “Review All” button on the item row
  • Review status can be toggled back to unreviewed if needed

Integration with Browse

Integrity issues are also visible in the Browse page’s item detail panel, where you can see hash and validation state for each version and toggle review status directly.

Workflow Recommendations

  1. Check Home: The Home page shows integrity issue counts per root in the recent activity section
  2. Filter: Use issue type and review status filters to focus on what matters
  3. Investigate: Expand items to see version details, or click through to Browse for full context
  4. Review: Mark issues as reviewed once you’ve assessed them
  5. Track: Monitor integrity trends on the Trends page

Trends

The Trends page provides interactive visualizations showing how your data evolves over time across multiple scans.

Available Charts

Storage Size

Track total storage usage over time:

  • See growth or reduction in directory sizes
  • Identify storage bloat
  • Displayed in both decimal (GB) and binary (GiB) units

Monitor the number of items:

  • Total files and folders over time
  • Detect unexpected additions or deletions
  • Separate trend lines for files vs. directories

Change Activity

Visualize filesystem activity:

  • Additions, modifications, and deletions per scan
  • Identify periods of high change
  • Understand modification patterns

Integrity Issues

Track integrity issues over time:

  • New validation failures per scan
  • New suspect hash changes per scan
  • Cumulative integrity issue counts

Features

Root Selection

Select which scan root to analyze from the dropdown. Each root maintains independent trend data.

Tip: If you select a root on the Browse or Integrity page before navigating to Trends, the same root will be pre-selected automatically via the shared root context.

Date Range Filtering

Customize the time window:

  • Last 7 days
  • Last 30 days
  • Last 3 months
  • Last 6 months
  • Last year
  • Custom range (manual date pickers)

Baseline Exclusion

The Changes and Integrity charts offer a checkbox to exclude the first scan from the visualization. The first scan of a root often shows large numbers of additions and integrity issues which can skew trend visualizations. This toggle only appears when the first scan falls within the selected date range.

Interactive Charts

  • Hover for detailed values
  • Pan and zoom on time ranges
  • Toggle data series on/off

Requirements

Trend analysis requires multiple scans of the same root. After your first scan, you’ll see a message prompting you to run additional scans to generate trend data.

Use Cases

  • Capacity Planning: Monitor storage growth rates
  • Change Detection: Identify unusual modification patterns
  • Validation Monitoring: Track data integrity over time
  • Baseline Comparison: See how your filesystem evolves from initial state

History

The History page provides a complete log of scan and task activity — answering questions like “Did my scheduled scan run?” and “What happened with that scan that errored?”


Activity Log

The History page shows a paginated table of completed scans and tasks with key information:

  • Scan ID and associated root
  • Start and end times
  • State (Completed, Stopped, Error)
  • Item counts — files, folders, and total size discovered
  • Change counts — additions, modifications, and deletions detected
  • Integrity — new validation errors and suspect hashes detected, linking to the Integrity page
  • Scan options — whether hashing and validation were enabled

Filtering

Filter the activity log by:

  • Root — Show activity for a specific monitored directory
  • Time range — Narrow to a date range
  • Outcome — Filter by completed, stopped, or errored

Use Cases

  • Schedule verification: Confirm that scheduled scans ran as expected
  • Troubleshooting: Identify scans that stopped or errored, and review their details
  • Trend awareness: Notice patterns in change counts or integrity issues across scans
  • Audit trail: Review what the system has been doing over any time period

Tip: The Home page shows a compact summary of recent activity. Use the History page when you need the full, filterable log.

Data Explorer

The Data Explorer provides both visual query building and free-form query capabilities for analyzing your fsPulse data. Located in the utility section of the sidebar, it is designed for power users who need detailed data access beyond what the primary pages offer.

Overview

Data Explorer offers two ways to query your data:

  • Structured tabs (Roots, Scans, Items, Versions, Hashes) — Visual query builder with column selection, sorting, and filtering
  • Query tab — Free-form query entry using fsPulse’s query language

Structured Query Tabs

The Roots, Scans, Items, Versions, and Hashes tabs provide a visual interface for building queries without writing query syntax.

Layout

Each structured tab displays:

  • Column selector panel (left) — Configure which columns to display and how
  • Results table (right) — View query results with pagination

Column Controls

The column selector provides several controls for each available column:

  • Checkbox: Show or hide the column in results
  • Drag handle: Reorder columns by dragging
  • Sort: Click to cycle through ascending, descending, or no sort
  • Filter: Add a filter condition for this column

Working with Columns

Show/Hide Columns: Check or uncheck the box next to any column name to include or exclude it from results.

Reorder Columns: Drag columns using the grip handle to change the display order in the results table.

Sort Results: Click the sort control to cycle through no sort, ascending, and descending. Only one column can be sorted at a time.

Filter Data: Click the filter button to add a filter condition. Active filters display as badges showing the filter value. Click the X to remove a filter.

Reset: Click the reset button in the column header to restore all columns to their default visibility and order, and to clear all filters and sorts.

Query Tab

The Query tab provides a free-form interface for writing queries using fsPulse’s SQL-inspired query language.

Features

  • Query input — Text area for entering queries
  • Execute — Run the query (or press Cmd/Ctrl + Enter)
  • Example queries — Expandable section with clickable sample queries
  • Documentation link — Quick access to the full query syntax reference
  • Results table — Paginated results display

Example Queries

The Query tab includes sample queries you can click to populate the input:

items limit 10
versions where is_current:(T) show item_path, size, mod_date limit 20
versions where is_current:(T), item_type:(F), size:(>1000000) show item_path, size order by size desc limit 20
versions where is_deleted:(T) show item_path, item_type, first_scan_id, last_scan_id order by last_scan_id desc limit 20
hashes where hash_state:(S) show item_path, item_version, file_hash limit 20

Query Domains

Both interfaces support querying five data domains:

  • roots: Configured scan roots
  • scans: Scan metadata and statistics
  • items: Item identity — permanent properties of tracked files and directories
  • versions: Item version history — one row per distinct state over time
  • hashes: Hash observations — SHA-256 integrity records for item versions

When to Use Each Interface

Use structured tabs when:

  • Exploring data without knowing the exact query syntax
  • Quickly toggling columns to find relevant information
  • Building simple filters and sorts visually

Use the Query tab when:

  • Writing complex queries with multiple conditions
  • Using advanced query features (comparisons, ranges, multiple filters)
  • Reproducing a specific query you’ve used before
  • Learning the query syntax with immediate feedback

Query Syntax

For complete documentation on the query language including all operators, column names, and advanced features, see Query Syntax.

Roots, Schedules & Settings

fsPulse provides three dedicated pages for configuration: Roots for managing monitored directories, Schedules for automated scan timing, and Settings for application configuration and system information.


Roots

Adding a Root

  1. Click Add Root
  2. Enter the full filesystem path to monitor
  3. Optionally provide a friendly name
  4. Save

Managing Roots

  • Scan Now: Create a manual scan task for the root
  • Delete: Remove the root (also removes associated schedules)
  • View root statistics and last scan time

Schedules

fsPulse supports flexible scheduling options for automated monitoring.

Schedule Types

  • Daily: Run at a specific time each day
  • Weekly: Run on specific days of the week at a chosen time
  • Monthly: Run on a specific day of the month
  • Interval: Run every N hours/minutes

Creating a Schedule

  1. Click Add Schedule
  2. Select the root to scan
  3. Choose schedule type and timing
  4. Configure scan options:
    • Enable hashing (default: all files)
    • Enable validation (default: new/changed files)
  5. Save

Schedule Management

  • Enable/Disable: Temporarily pause schedules without deleting them
  • Edit: Modify timing or scan options
  • Delete: Remove the schedule

Scans and other tasks are queued and executed sequentially to prevent resource conflicts. You can view upcoming and running tasks on the Home page.


Configuration

A table displays all configurable settings with their values from each source:

| Column | Description |
| --- | --- |
| Setting | The setting name |
| Default | Built-in default value |
| Config File | Value from config.toml (if set) |
| Environment | Value from environment variable (if set) |

The active value (the one fsPulse actually uses) is highlighted with a green border. Configuration precedence follows: Environment variable > Config file > Default.

Settings that require a server restart to take effect are marked with a restart indicator.

Editing Settings

Click Edit on any setting to modify its value in config.toml. The edit dialog provides:

  • Input validation (numeric ranges, valid log levels, etc.)
  • A Delete option to remove the setting from config.toml and revert to the default

See Configuration for details on all available settings and environment variables.

Database

Shows database statistics and maintenance tools:

  • Database path: Location of the fspulse.db file
  • Total size: Current size of the database file
  • Wasted space: Space that can be reclaimed through compaction

Compact Database: Over time, deletions and updates leave unused space in the SQLite database. The Compact Database button runs SQLite’s VACUUM command to reclaim this space. This operation runs as a background task.


About

Displays application metadata:

  • fsPulse version
  • Database schema version
  • Build date and git commit information
  • Links to documentation and the GitHub repository

Command-Line Interface

fsPulse is a web-first application. The CLI exists solely to launch the web server—all functionality, including scanning, querying, browsing, and configuration, is accessed through the web interface.


Starting fsPulse

To start the fsPulse server:

fspulse

Or explicitly:

fspulse serve

Both commands are equivalent. The server starts on http://127.0.0.1:8080 by default.

Once running, open your browser to access the full web interface for:

  • Managing scan roots and schedules
  • Running and monitoring scans
  • Browsing your filesystem data
  • Querying and exploring results
  • Reviewing integrity issues

Configuration

fsPulse behavior is configured through environment variables or a config file, not command-line flags.

Environment Variables

Set these before running fspulse:

# Server settings
export FSPULSE_SERVER_HOST=0.0.0.0    # Bind address (default: 127.0.0.1)
export FSPULSE_SERVER_PORT=9090       # Port number (default: 8080)

# Analysis settings
export FSPULSE_ANALYSIS_THREADS=16    # Worker threads (default: 8)

# Logging
export FSPULSE_LOGGING_FSPULSE=debug  # Log level (default: info)

# Data location
export FSPULSE_DATA_DIR=/custom/path  # Data directory override

fspulse

Configuration File

fsPulse also reads from config.toml in the data directory. See Configuration for complete documentation including:

  • All available settings
  • Environment variable reference
  • Platform-specific data directory locations
  • Docker configuration

Getting Help

View version and basic usage:

fspulse --help
fspulse --version

Query Syntax

fsPulse provides a flexible, SQL-like query language for exploring scan results. This language supports filtering, custom column selection, ordering, and limiting the number of results.


Query Structure

Each query begins with one of the five supported domains:

  • roots
  • scans
  • items
  • versions
  • hashes

You can then add any of the following optional clauses:

DOMAIN [WHERE ...] [GROUP BY ...] [SHOW ...] [ORDER BY ...] [LIMIT ...] [OFFSET ...]

Column Availability

Each domain has a set of available columns. Columns marked as default are shown when no SHOW clause is specified.

roots Domain

| Column | Type | Default |
| --- | --- | --- |
| root_id | Integer | Yes |
| root_path | Path | Yes |

scans Domain

| Column | Type | Default | Description |
| --- | --- | --- | --- |
| scan_id | Integer | Yes | Unique scan identifier |
| root_id | Integer | Yes | Root directory identifier |
| schedule_id | Integer | Yes | Schedule identifier (null for manual scans) |
| started_at | Date | Yes | Timestamp when scan started |
| ended_at | Date | Yes | Timestamp when scan ended (null if incomplete) |
| was_restarted | Boolean | Yes | True if scan was resumed after restart |
| scan_state | Scan State Enum | Yes | State of the scan |
| is_hash | Boolean | Yes | Hash new or changed files |
| hash_all | Boolean | No | Hash all items including unchanged |
| is_val | Boolean | Yes | Validate new or changed files |
| file_count | Integer | Yes | Count of files found in the scan |
| folder_count | Integer | Yes | Count of directories found in the scan |
| total_size | Integer | Yes | Total size in bytes of all files |
| new_hash_suspect_count | Integer | No | New suspect hashes detected in this scan |
| new_val_invalid_count | Integer | No | New validation failures detected in this scan |
| add_count | Integer | Yes | Number of items added in the scan |
| modify_count | Integer | Yes | Number of items modified in the scan |
| delete_count | Integer | Yes | Number of items deleted in the scan |
| val_unknown_count | Integer | No | Files with unknown validation state |
| val_valid_count | Integer | No | Files with valid validation state |
| val_invalid_count | Integer | No | Files with invalid validation state |
| val_no_validator_count | Integer | No | Files with no available validator |
| hash_unknown_count | Integer | No | Files with unknown hash state |
| hash_baseline_count | Integer | No | Files with baseline hash state |
| hash_suspect_count | Integer | No | Files with suspect hash state |
| error | String | No | Error message if scan failed |

items Domain

The items domain queries item identity — the permanent properties of each tracked file or directory.

| Column | Type | Default | Description |
| --- | --- | --- | --- |
| item_id | Integer | Yes | Unique item identifier |
| root_id | Integer | Yes | Root directory identifier |
| item_path | Path | Yes | Full path of the item |
| item_name | Path | Yes | Filename or directory name (last segment) |
| file_extension | String | Yes | Lowercase file extension (null for folders/extensionless) |
| item_type | Item Type Enum | Yes | File, Directory, Symlink, or Unknown |
| has_validator | Boolean | No | True if a structural validator exists for this file type |
| do_not_validate | Boolean | No | True if user has opted this item out of validation |

versions Domain

The versions domain queries individual item version rows — each representing a distinct state of an item over a temporal range. Filter with is_current:(T) to query only the latest version of each item.

| Column | Type | Default | Description |
| --- | --- | --- | --- |
| item_version | Integer | Yes | Version number (per-item sequence) |
| item_id | Integer | Yes | Item this version belongs to |
| root_id | Integer | Yes | Root directory identifier |
| item_path | Path | Yes | Full path of the item |
| item_name | Path | No | Filename or directory name (last segment) |
| file_extension | String | No | Lowercase file extension (null for folders/extensionless) |
| item_type | Item Type Enum | Yes | File, Directory, Symlink, or Unknown |
| first_scan_id | Integer | Yes | Scan where this version was first observed |
| last_scan_id | Integer | Yes | Last scan confirming this version’s state |
| is_added | Boolean | No | True if item was added in this version |
| is_deleted | Boolean | Yes | True if item was deleted in this version |
| is_current | Boolean | No | True if this is the latest version of the item |
| access | Access Status | No | Access state |
| mod_date | Date | Yes | Last modification date |
| size | Integer | Yes | File size in bytes |
| add_count | Integer | No | Descendant items added (folders only; null for files) |
| modify_count | Integer | No | Descendant items modified (folders only; null for files) |
| delete_count | Integer | No | Descendant items deleted (folders only; null for files) |
| unchanged_count | Integer | No | Descendant items unchanged (folders only; null for files) |
| val_scan_id | Id | No | Scan in which this version was validated (NULL if not yet validated; may differ from first_scan_id) |
| val_state | Validation Status | No | Validation state (files only; null for folders) |
| val_error | String | No | Validation error message (files only; null for folders) |
| val_reviewed_at | Date | No | Timestamp when user marked a validation issue as reviewed (NULL until reviewed) |
| hash_reviewed_at | Date | No | Timestamp when user marked a hash integrity issue as reviewed (NULL until reviewed) |

hashes Domain

The hashes domain queries hash observation records — each representing a SHA-256 hash computed for an item version during a scan.

| Column | Type | Default | Description |
| --- | --- | --- | --- |
| item_id | Integer | Yes | Item this hash belongs to |
| item_version | Integer | Yes | Version this hash was observed on |
| item_path | Path | Yes | Full path of the item |
| item_name | Path | No | Filename or directory name (last segment) |
| first_scan_id | Integer | Yes | Scan where this hash was first observed |
| last_scan_id | Integer | Yes | Last scan confirming this hash |
| file_hash | Hash | Yes | SHA-256 content hash (hex) |
| hash_state | Hash State | Yes | Baseline or Suspect |

The WHERE Clause

The WHERE clause filters results using one or more filters. Each filter has the structure:

column_name:(value1, value2, ...)

Values must match the column’s type. You can use individual values, ranges (when supported), or a comma-separated combination.

| Type | Examples | Notes |
| --- | --- | --- |
| Integer | 5, 1..5, 3, 5, 7..9, > 1024, < 10, null, not null | Supports ranges, comparators, and nullability. Ranges are inclusive. |
| Date | 2024-01-01, 2024-01-01 14:30:00, 1711929600, null, not null | Three input forms (see below). Ranges are inclusive. |
| Boolean | true, false, T, F, null, not null | Unquoted. |
| String | 'example', 'error: missing EOF', null, not null | Quoted strings. |
| Path | 'photos/reports', 'file.txt' | Must be quoted. Null values are not supported. |
| Validation Status | V, I, N, U, null, not null | Valid, Invalid, No Validator, Unknown. Null for folders. Unquoted. |
| Hash State | V, S, U, null, not null | Valid, Suspect, Unknown. Null for folders. Unquoted. |
| Item Type Enum | F, D, S, U | File, Directory, Symlink, Unknown. Unquoted. |
| Scan State Enum | S, W, AF, AS, C, P, E | Scanning, Sweeping, Analyzing Files, Analyzing Scan, Completed, Stopped, Error. A is shorthand for AF. Unquoted. |
| Access Status | N, M, R | No Error, Meta Error, Read Error. Unquoted. |
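For instance, the following filters combine several of the value forms above; all columns are from the domains documented earlier:

```
items where item_type:(F), file_extension:('jpg', 'png')
scans where file_count:(> 1000), scan_state:(C)
versions where size:(1024..1048576), is_current:(T)
```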

Date Filter Formats

Date columns accept three input forms, matching the three display formats available via @short, @full, and @timestamp. Any value produced by a query can be used directly as filter input.

| Form | Example | Behavior |
| --- | --- | --- |
| Date only | 2025-01-15 | Matches the entire day (00:00:00 through 23:59:59 local time) |
| Date and time | 2025-01-15 14:30:00 | Matches that exact second |
| Unix epoch | 1737936000 | Matches that exact second (10+ digits, UTC) |

These forms can be mixed freely within a filter or range:

# Date-only range
started_at:(2025-01-01..2025-01-31)

# Exact time range
started_at:(2025-01-15 08:00:00..2025-01-15 17:00:00)

# Mixed forms in a range
started_at:(2025-01-15..2025-01-16 14:30:00)
mod_date:(1737936000..2025-02-01)

# Multiple values (OR'd)
started_at:(2025-01-15, 2025-02-01 09:00:00, 1737936000)

Combining Filters

When specifying multiple values within a single filter, the match is logically OR. When specifying multiple filters across different columns, the match is logically AND.

For example:

scans where started_at:(2025-01-01..2025-01-07, 2025-02-01..2025-02-07), is_hash:(T)

This query matches scans that:

  • Occurred in either the first week of January 2025 or the first week of February 2025
  • AND were performed with hashing enabled

The SHOW Clause

The SHOW clause controls which columns are displayed and how some of them are formatted. If omitted, a default column set is used.

You may specify:

  • A list of column names
  • The keyword default to insert the default set
  • The keyword all to show all available columns
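For example, each of the three forms in practice:

```
versions show default, val_state limit 10
items show all limit 5
items show item_id, item_path, item_type
```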

Formatting modifiers can be applied using the @ symbol:

item_path@name, mod_date@short

Format Specifiers by Type

| Type | Allowed Format Modifiers |
| --- | --- |
| Date | full, short, timestamp |
| Path | full, relative, short, name |
| Validation / Hash State / Enum / Boolean | full, short |
| Integer / String | (no formatting options) |

All three date display formats (@short, @full, @timestamp) produce values that can be used directly as date filter input — see Date Filter Formats above.


The GROUP BY Clause

Groups rows by one or more columns and enables aggregate functions in the SHOW clause. When GROUP BY is used, a SHOW clause is required.

versions where is_current:(T), root_id:(1) group by file_extension show file_extension, count(*), sum(size) order by sum(size) desc

Aggregate Functions

| Function | Applies To | Description |
| --- | --- | --- |
| count(*) | Any | Count all rows in the group |
| count(col) | Any column | Count non-null values |
| sum(col) | Integer columns | Sum of values |
| avg(col) | Integer columns | Average of values |
| min(col) | Integer, Date, Id columns | Minimum value |
| max(col) | Integer, Date, Id columns | Maximum value |
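A query exercising several aggregates at once, restricted to current file versions:

```
versions where is_current:(T), item_type:(F) group by file_extension show file_extension, count(*), avg(size), max(size) order by count(*) desc
```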

Rules

  • Every non-aggregate column in SHOW must also appear in GROUP BY
  • Aggregate functions can be used in ORDER BY (e.g., order by count(*) desc)

The ORDER BY Clause

Specifies sort order for the results. Supports both column names and aggregate expressions.

items order by mod_date desc, item_path asc
scans group by root_id show root_id, count(*) order by count(*) desc

If direction is omitted, ASC is assumed.


The LIMIT and OFFSET Clauses

LIMIT restricts the number of rows returned. OFFSET skips a number of rows before returning results.

items limit 50 offset 100

Examples

# Items whose path contains 'reports'
items where item_path:('reports')

# All PDF items
items where file_extension:('pdf')

# Current state of large files, sorted by size
versions where is_current:(T), item_type:(F), size:(> 1048576) show item_path, size order by size desc

# Version history for a specific item
versions where item_id:(42) order by first_scan_id

# Deleted versions across all roots
versions where is_deleted:(true) show item_path, item_type, first_scan_id, last_scan_id

# Versions with validation failures
versions where val_state:(I) show default, val_error order by first_scan_id desc

# Suspect hash observations
hashes where hash_state:(S) show item_path, item_version, file_hash

# All hash observations for a specific item
hashes where item_id:(42) order by first_scan_id

# Scans with timestamps for programmatic processing
scans show scan_id, started_at@timestamp, file_count order by started_at desc limit 10

# Scans with change and integrity counts
scans show scan_id, file_count, total_size, add_count, modify_count, delete_count, new_hash_suspect_count, new_val_invalid_count order by started_at desc

# File count and total size by extension
versions where is_current:(T), root_id:(1), item_type:(F) group by file_extension show file_extension, count(*), sum(size) order by sum(size) desc

# Scan count per root
scans group by root_id show root_id, count(*), max(total_size), max(file_count) order by count(*) desc

# Hash state distribution
hashes group by hash_state show hash_state, count(*)

# Validation failures by root
versions where val_state:(I) group by root_id show root_id, count(*)

See also: Data Explorer · Validators · Configuration

Configuration

fsPulse supports persistent, user-defined configuration through a file named config.toml. This file allows you to control logging behavior, analysis settings, server configuration, and more.

Web UI: Most configuration settings can also be viewed and edited through the Settings page in the web interface, which shows the active value and its source (default, config file, or environment variable).

📦 Docker Users: If you’re running fsPulse in Docker, see the Docker Deployment chapter for Docker-specific configuration including environment variable overrides and volume management.


Finding config.toml

The config.toml file is stored in fsPulse’s data directory. The location depends on how you’re running fsPulse:

Docker Deployments

When running in Docker, the data directory is /data, so the config file is located at /data/config.toml inside the container. fsPulse automatically creates this file with default settings on first run.

To access it from your host machine:

# View the config
docker exec fspulse cat /data/config.toml

# Extract to edit
docker exec fspulse cat /data/config.toml > config.toml

See the Docker Deployment chapter for details on editing the config in Docker.

Native Installations

fsPulse uses the directories crate to determine the platform-specific data directory location:

| Platform | Data Directory Location | Example Path |
| --- | --- | --- |
| Linux | $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse | /home/alice/.local/share/fspulse |
| macOS | $HOME/Library/Application Support/fspulse | /Users/alice/Library/Application Support/fspulse |
| Windows | %LOCALAPPDATA%\fspulse\data | C:\Users\Alice\AppData\Local\fspulse\data |

The config file is located at <data_dir>/config.toml.

On first run, fsPulse automatically creates the data directory and writes a default config.toml if one doesn’t exist.

Tip: You can delete config.toml at any time to regenerate it with defaults. Newly introduced settings will not automatically be added to an existing file.

Override: The data directory location can be overridden using the FSPULSE_DATA_DIR environment variable. See Data Directory and Database Settings for details.


Configuration Settings

Here are the current available settings and their default values:

[logging]
fspulse = "info"
lopdf = "error"

[server]
port = 8080
host = "127.0.0.1"

[analysis]
threads = 8

Logging

fsPulse uses the Rust log crate, and so does the PDF validation crate lopdf. You can configure logging levels independently for each subsystem in the [logging] section.

Supported log levels:

  • error – only critical errors
  • warn – warnings and errors
  • info – general status messages (default for fsPulse)
  • debug – verbose output for debugging
  • trace – extremely detailed logs

Log File Behavior

  • Logs are written to <data_dir>/logs/
  • Each run of fsPulse creates a new log file, named using the current date and time
  • Individual log files are capped at 50 MB; if a single run exceeds this, it continues in a new file
  • fsPulse retains up to 20 log files; older files are automatically deleted

Server Settings

The [server] section controls the web UI server behavior when running fspulse serve.

  • host: IP address to bind to (default: 127.0.0.1)
    • 127.0.0.1 - Localhost only (secure, only accessible from same machine)
    • 0.0.0.0 - All interfaces (required for Docker, remote access)
  • port: Port number to listen on (default: 8080)
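If editing config.toml is preferred over environment variables, the same override looks like this; the 0.0.0.0 bind and port 9090 are illustrative values, not new defaults:

```toml
[server]
host = "0.0.0.0"   # listen on all interfaces (needed for Docker/remote access)
port = 9090        # any free port; default is 8080
```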

Note: In Docker deployments, the host should be 0.0.0.0 to allow access from outside the container. The Docker image sets this automatically via environment variable.


Analysis Settings

The [analysis] section controls how many threads are used during the analysis phase of scanning (for hashing and validation).

  • threads: number of worker threads (default: 8)

You can adjust this based on your system’s CPU count or performance needs. fsPulse uses SHA-256 for file hashing to detect content changes and verify integrity.


Environment Variables

All configuration settings can be overridden using environment variables. This is particularly useful for:

  • Docker deployments where editing files is inconvenient
  • Different environments (development, staging, production) with different settings
  • NAS deployments (TrueNAS, Unraid) using web-based configuration UIs
  • CI/CD pipelines where configuration is managed externally

How It Works

Environment variables follow the pattern: FSPULSE_<SECTION>_<FIELD>

The <SECTION> corresponds to a section in config.toml (like [server], [logging], [analysis]), and <FIELD> is the setting name within that section.

Precedence (highest to lowest):

  1. Environment variables - Override everything
  2. config.toml - User-defined settings
  3. Built-in defaults - Fallback values

This allows you to set sensible defaults in config.toml and override them as needed per deployment.

Complete Variable Reference

Server Settings

Control the web UI server behavior (when running fspulse serve):

| Variable | Default | Valid Values | Description |
| --- | --- | --- | --- |
| FSPULSE_SERVER_HOST | 127.0.0.1 | IP address | Bind address. Use 0.0.0.0 for Docker/remote access, 127.0.0.1 for localhost only |
| FSPULSE_SERVER_PORT | 8080 | 1-65535 | Web UI port number |

Examples:

# Native - serve only on localhost
export FSPULSE_SERVER_HOST=127.0.0.1
export FSPULSE_SERVER_PORT=8080
fspulse serve

# Docker - must bind to all interfaces
docker run -e FSPULSE_SERVER_HOST=0.0.0.0 -e FSPULSE_SERVER_PORT=9090 -p 9090:9090 ...

Logging Settings

Configure log output verbosity:

| Variable | Default | Valid Values | Description |
| --- | --- | --- | --- |
| FSPULSE_LOGGING_FSPULSE | info | error, warn, info, debug, trace | fsPulse application log level |
| FSPULSE_LOGGING_LOPDF | error | error, warn, info, debug, trace | PDF library (lopdf) log level |

Examples:

# Enable debug logging
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_LOGGING_LOPDF=error

# Docker
docker run -e FSPULSE_LOGGING_FSPULSE=debug ...

Analysis Settings

Configure scan behavior and performance:

| Variable | Default | Valid Values | Description |
| --- | --- | --- | --- |
| FSPULSE_ANALYSIS_THREADS | 8 | 1-24 | Number of worker threads for analysis phase (hashing/validation) |

Examples:

# Use 16 threads for faster scanning
export FSPULSE_ANALYSIS_THREADS=16

# Docker
docker run -e FSPULSE_ANALYSIS_THREADS=16 ...

Data Directory and Database Settings

Control where fsPulse stores its data:

| Variable | Default | Valid Values | Description |
| --- | --- | --- | --- |
| FSPULSE_DATA_DIR | Platform-specific | Directory path | Override the data directory location. Contains config, logs, and database (by default). Cannot be set in config.toml. |
| FSPULSE_DATABASE_DIR | <data_dir> | Directory path | Override database directory only (advanced). Stores the database outside the data directory. This is a directory path, not a file path - the database file is always named fspulse.db |

Data Directory:

The data directory contains configuration (config.toml), logs (logs/), and the database (fspulse.db) by default. It is determined by:

  1. FSPULSE_DATA_DIR environment variable (if set)
  2. Platform-specific project local directory (default):
    • Linux: $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse
    • macOS: $HOME/Library/Application Support/fspulse
    • Windows: %LOCALAPPDATA%\fspulse\data
    • Docker: /data

Database Location:

By default, the database is stored in the data directory as fspulse.db. You can override this to store the database separately:

Database Directory Precedence:

  1. FSPULSE_DATABASE_DIR environment variable (if set) - highest priority
  2. config.toml [database] dir setting (if configured)
  3. Data directory (from FSPULSE_DATA_DIR or platform default)

Important Notes:

  • The database file is always named fspulse.db within the determined directory
  • Configuration and logs always remain in the data directory, even if the database is moved
  • For Docker: it’s recommended to use volume/bind mounts to /data rather than overriding FSPULSE_DATA_DIR

Docker-Specific Variables

These variables are specific to Docker deployments:

| Variable | Default | Valid Values | Description |
| --- | --- | --- | --- |
| PUID | 1000 | UID number | User ID to run fsPulse as (for NAS permission matching) |
| PGID | 1000 | GID number | Group ID to run fsPulse as (for NAS permission matching) |
| TZ | UTC | Timezone string | Timezone for log timestamps and UI (e.g., America/New_York) |

See Docker Deployment - NAS Deployments for details on PUID/PGID usage.

Usage Examples

Native (Linux/macOS/Windows):

# Set environment variables
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_ANALYSIS_THREADS=16

# Run fsPulse (uses env vars)
fspulse serve

Docker - Command Line:

docker run -d \
  --name fspulse \
  -e FSPULSE_SERVER_PORT=9090 \
  -e FSPULSE_LOGGING_FSPULSE=debug \
  -e FSPULSE_ANALYSIS_THREADS=16 \
  -p 9090:9090 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Docker Compose:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    environment:
      - FSPULSE_SERVER_PORT=9090
      - FSPULSE_LOGGING_FSPULSE=debug
      - FSPULSE_ANALYSIS_THREADS=16
    ports:
      - "9090:9090"

Verifying Environment Variables

To see what environment variables fsPulse is using:

Native:

env | grep FSPULSE_

Docker:

docker exec fspulse env | grep FSPULSE_

Docker Configuration

When running fsPulse in Docker, configuration is managed slightly differently. The config file lives at /data/config.toml inside the container, and you have several options for customizing settings.

For step-by-step instructions on configuring fsPulse in Docker, including editing config files and using environment variables, see the Docker Deployment - Configuration section.


New Settings and Restoring Defaults

fsPulse may expand its configuration options over time. When new settings are introduced, they won’t automatically appear in your existing config.toml. To take advantage of new options, either:

  • Manually add new settings to your config file
  • Delete the file to allow fsPulse to regenerate it with all current defaults

Validators

fsPulse can optionally validate file contents during the analysis phase of a scan. To enable validation, configure it in the web UI when setting up or initiating a scan.

Validation allows fsPulse to go beyond basic metadata inspection and attempt to decode the file’s contents using format-specific logic. This helps detect corruption or formatting issues in supported file types.


Validation Status Codes

Each file in the database has an associated validation status (folders do not have a validation state):

| Status Code | Meaning |
| --- | --- |
| U | Unknown — item has never been included in a validation scan |
| V | Valid — most recent validation scan found no issues |
| I | Invalid — validation failed; see validation_error field |
| N | No Validator — fsPulse does not currently support this file type |

The validation_error field contains the error message returned by the validator only if the item was marked invalid. This field is empty for valid items or items with no validator.

Note: Some validation “errors” surfaced by the underlying libraries may not indicate corruption, but rather unsupported edge cases or metadata formatting. Always review the error messages before assuming a file is damaged.


Supported Validators

fsPulse relies on external Rust crates for performing format-specific validation. We gratefully acknowledge the work of the developers behind these crates for making them available to the Rust community.

| File Types | Crate | Link |
| --- | --- | --- |
| FLAC audio (.flac) | claxon | claxon on GitHub |
| Images (.jpg, .jpeg, .png, .gif, .tiff, .bmp) | image | image on GitHub |
| PDF documents (.pdf) | lopdf | lopdf on GitHub |

Validation support may expand in future versions of fsPulse to cover additional file types such as ZIP archives, audio metadata, or XML/JSON files.


See the Query Syntax page for full details on query clauses and supported filters.

Advanced Topics

This section covers advanced concepts and technical details about fsPulse’s internal operation.

Contents

These topics are useful for developers, contributors, or users who want to understand fsPulse’s architecture more deeply.

Concepts

fsPulse tracks the state of your filesystem over time using a temporal versioning model. The core entities — roots, scans, items, item versions, hash versions, schedules, and tasks — form a layered model for understanding how your data evolves.


Root

A root is the starting point for a scan. It represents a specific path on the filesystem that you tell fsPulse to track.

  • Paths are stored as absolute paths.
  • Each root has a unique ID.
  • You can scan a root multiple times over time.

Scan

A scan is a snapshot of a root directory at a specific point in time.

Each scan records:

  • The time the scan was started and ended
  • Whether hashing and validation were enabled
  • Counts of files, folders, and total size discovered
  • Counts of additions, modifications, and deletions detected
  • Counts of files in each validation state and hash state
  • Counts of new integrity issues (suspect hashes and validation failures) detected

Scans are always tied to a root via root_id and are ordered chronologically by started_at.


Item

An item represents the stable identity of a single file or folder discovered during scanning. The items table stores only identity information:

  • Root
  • Path
  • Name (last path segment)
  • Type (File, Directory, Symlink, or Unknown)

An item’s mutable state — metadata, hash, validation — is stored in item versions, not in the item row itself.


Item Version

An item version captures the full known state of an item at a point in time. fsPulse uses temporal versioning: instead of maintaining one mutable row per item, the system stores one row per distinct state. A new version is created only when an item’s observable state changes.

Each version contains:

  • Temporal range — first_scan_id (when this state was first observed) and last_scan_id (last scan where it was confirmed)
  • Deletion status — whether the item existed or had been deleted
  • Access status — whether the item could be read successfully
  • Metadata — modification date and size
  • Validation state — format validation result and any error message (files only; null for folders)
  • Review timestamps — val_reviewed_at and hash_reviewed_at record when a user acknowledged integrity issues on this version (files only)
  • Descendant change counts — add, modify, delete, and unchanged counts for child items (folders only; null for files)

An item that exists unchanged across 50 scans has exactly one version row. You never need to examine multiple versions to reconstruct the current state — each version is a complete snapshot.

Deriving Change Types

Change types are derived by comparing adjacent versions of an item:

  • Add: No previous version exists, or the previous version was a deletion
  • Delete: This version marks the item as deleted
  • Modify: Previous version exists, neither is deleted, and state differs
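The three rules above can be sketched as a small comparison function. This is an illustration only, not fsPulse's source code: the ChangeType enum and VersionRow fields are invented for the example, and a real implementation would compare the full observable state (access status, validation, and so on).

```rust
// Illustrative sketch: derive a change type by comparing adjacent versions.
#[derive(Debug, PartialEq)]
enum ChangeType {
    Add,
    Delete,
    Modify,
    Unchanged,
}

// Hypothetical subset of a version row's observable state.
struct VersionRow {
    is_deleted: bool,
    mod_date: i64,
    size: u64,
}

fn change_type(prev: Option<&VersionRow>, curr: &VersionRow) -> ChangeType {
    match prev {
        // No previous version exists -> Add
        None => ChangeType::Add,
        // Previous version was a deletion and the item reappeared -> Add
        Some(p) if p.is_deleted && !curr.is_deleted => ChangeType::Add,
        // This version marks the item as deleted -> Delete
        Some(_) if curr.is_deleted => ChangeType::Delete,
        // Both versions exist and observable state differs -> Modify
        Some(p) if p.mod_date != curr.mod_date || p.size != curr.size => ChangeType::Modify,
        Some(_) => ChangeType::Unchanged,
    }
}
```

Because each version row is a complete snapshot, only two adjacent rows are ever needed to classify a change.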

Hash Version

A hash version tracks the SHA-256 content hash of a file over time, bound to a specific item version. Hash observations are stored in a separate hash_versions table.

Each hash version records:

  • The hash value
  • A hash state: Baseline (first hash for a version, or hash change explained by metadata change) or Suspect (hash changed without a metadata change — possible bit rot or tampering)
  • A temporal range (first_scan_id to last_scan_id) indicating when this hash was observed

An item version can have zero hash observations (never hashed), one (stable hash), or multiple (hash changed between scans). See Scanning - Hash States for details.


Integrity Reviews

Integrity issues — suspect hashes and validation failures — are surfaced on the Integrity page. Users acknowledge issues by marking them as reviewed, which records a timestamp on the item version. Reviews are a lightweight acknowledgment mechanism tracked independently for hash and validation issues on each version.


Schedule

A schedule defines automatic recurring scans. Schedules specify:

  • Which root to scan
  • Timing (daily, weekly, monthly, or interval-based)
  • Scan options (hashing mode, validation mode)

Schedules can be enabled or disabled independently.


Task

A task is a unit of work in the execution queue. Tasks are created from manual scan requests or triggered by schedules. The Home page shows active and upcoming tasks; the History page shows completed tasks.

Tasks can be paused globally, and individual tasks can be stopped while in progress.


Entity Relationships

Root
 ├── Schedule (recurring scan configuration)
 ├── Scan (one per execution)
 └── Item (stable identity)
      └── Item Version (state at a point in time)
           └── Hash Version (hash observation over time)

These concepts form the foundation of fsPulse’s scanning, browsing, and query capabilities.

Database

fsPulse uses an embedded SQLite database to store all scan-related data. The database uses a temporal versioning model where item state is tracked through version rows rather than mutable updates.


Database Name and Location

The database file is always named:

fspulse.db

Data Directory

fsPulse uses a data directory to store application data including configuration, logs, and (by default) the database. The data directory location is determined by:

  1. FSPULSE_DATA_DIR environment variable (if set) - overrides the default location
  2. Platform-specific default - uses the directories crate’s project local directory:
| Platform | Value | Example |
|----------|-------|---------|
| Linux | `$XDG_DATA_HOME/fspulse` or `$HOME/.local/share/fspulse` | `/home/alice/.local/share/fspulse` |
| macOS | `$HOME/Library/Application Support/fspulse` | `/Users/Alice/Library/Application Support/fspulse` |
| Windows | `{FOLDERID_LocalAppData}\fspulse\data` | `C:\Users\Alice\AppData\Local\fspulse\data` |
| Docker | `/data` | `/data` |

What’s stored in the data directory:

  • Configuration file (config.toml)
  • Log files (logs/)
  • Database file (fspulse.db) - by default

Note for Docker users: The data directory defaults to /data and can be overridden with FSPULSE_DATA_DIR, but this is generally not recommended since you can map any host directory or Docker volume to /data instead.

Default Database Location

By default, the database is stored in the data directory:

<data_dir>/fspulse.db

For example:

/home/alice/.local/share/fspulse/fspulse.db

Custom Database Location

If you need to store the database outside the data directory (for example, on a different volume or network share), you can override the database directory specifically:

Environment Variable:

export FSPULSE_DATABASE_DIR=/path/to/custom/directory
fspulse serve

Config File (config.toml):

[database]
dir = "/path/to/custom/directory"

In both cases, fsPulse will store the database as fspulse.db inside the specified directory. The filename cannot be changed — only the directory is configurable.

Database Location Precedence:

  1. FSPULSE_DATABASE_DIR environment variable (highest priority)
  2. [database].dir in config.toml
  3. Data directory (from FSPULSE_DATA_DIR or platform default)
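The precedence above can be sketched in a few lines. This is our own illustrative helper, not fsPulse code; only the `FSPULSE_DATABASE_DIR` variable and `[database].dir` key come from the documentation:

```python
import os

def resolve_database_dir(config_db_dir, data_dir):
    """Illustrative sketch of the documented database-directory precedence."""
    env_dir = os.environ.get("FSPULSE_DATABASE_DIR")
    if env_dir:                 # 1. environment variable wins
        return env_dir
    if config_db_dir:           # 2. [database].dir from config.toml
        return config_db_dir
    return data_dir             # 3. fall back to the data directory

# With neither override set, the database lives in the data directory:
print(resolve_database_dir(None, "/home/alice/.local/share/fspulse"))
```

Whichever directory wins, the file inside it is always named `fspulse.db`.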

Important: Configuration and logs always remain in the data directory, even when the database is moved to a custom location.

See the Configuration - Database Settings section for more details.


Schema Overview

The database schema reflects fsPulse’s temporal versioning model:

| Table | Purpose |
|-------|---------|
| `roots` | Scanned root directories |
| `scans` | Individual scan executions with timing, settings, and summary statistics |
| `items` | Stable identity for each discovered file or folder (path, type, root) |
| `item_versions` | Temporal state — one row per distinct state of an item, with full metadata snapshot |
| `hash_versions` | Hash observations — SHA-256 hashes bound to specific item versions, with integrity state |
| `scan_schedules` | Recurring scan configurations (timing, options) |
| `tasks` | Work queue entries for scans and other operations |
| `scan_undo_log` | Transient rollback support for in-progress scans |

Temporal Versioning

The items table stores only identity information (root, path, name, type). All mutable state lives in item_versions, where each row represents a distinct state with a temporal range:

  • first_scan_id — the scan where this state was first observed
  • last_scan_id — the most recent scan where this state was confirmed

An item that remains unchanged across many scans has a single version row. A new version row is created only when observable state changes (metadata, access, or deletion status). Hash observations are stored separately in hash_versions, and validation results are written directly onto the version row.
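A toy sqlite3 session shows the idea. The columns are simplified stand-ins, not the real schema; only `first_scan_id` and `last_scan_id` are taken from the documentation. Confirming an unchanged state widens the existing row's range instead of inserting a new row:

```python
import sqlite3

# Miniature stand-in for item_versions: an item id, an opaque state string,
# and the temporal range over which that state was observed.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE item_versions (
    item_id INTEGER, state TEXT, first_scan_id INTEGER, last_scan_id INTEGER)""")

def record_scan(item_id, state, scan_id):
    row = con.execute(
        "SELECT rowid, state FROM item_versions WHERE item_id = ? "
        "ORDER BY last_scan_id DESC LIMIT 1", (item_id,)).fetchone()
    if row and row[1] == state:
        # Unchanged: extend the existing version's temporal range.
        con.execute("UPDATE item_versions SET last_scan_id = ? WHERE rowid = ?",
                    (scan_id, row[0]))
    else:
        # New or changed: open a new version row.
        con.execute("INSERT INTO item_versions VALUES (?, ?, ?, ?)",
                    (item_id, state, scan_id, scan_id))

for scan_id in (1, 2, 3):
    record_scan(1, "size=100,mtime=t0", scan_id)   # unchanged across three scans
record_scan(1, "size=120,mtime=t1", 4)             # modified in scan 4

print(con.execute("SELECT * FROM item_versions").fetchall())
# [(1, 'size=100,mtime=t0', 1, 3), (1, 'size=120,mtime=t1', 4, 4)]
```

Three unchanged scans produce a single row spanning scans 1–3; the modification in scan 4 opens a second row.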

Schema Versioning

The schema is versioned (currently version 29) and automatically migrated on startup. fsPulse handles all upgrades transparently — no manual migration steps are needed.


Database Compaction

Over time, deletions and updates can leave unused space in the database file. The Settings page provides a Compact Database action that reclaims this space by running SQLite’s VACUUM command.
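The effect of VACUUM can be reproduced against any SQLite file; the demo below builds a throwaway database rather than touching a real `fspulse.db` (which should only be compacted through the Settings page while fsPulse manages it):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database, delete most of its rows, then VACUUM it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (blob TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("x" * 1024,) for _ in range(1000)])
con.commit()
con.execute("DELETE FROM t")   # freed pages remain allocated in the file...
con.commit()
before = os.path.getsize(path)
con.execute("VACUUM")          # ...until VACUUM rewrites the file and shrinks it
con.close()
after = os.path.getsize(path)
print(before, after)           # the file is substantially smaller after VACUUM
```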


Exploring the Database

Because fsPulse uses SQLite, you can inspect the database using any compatible tool, such as:

  • DB Browser for SQLite
  • The sqlite3 command-line tool
  • SQLite integrations in many IDEs and database browsers

⚠️ Caution: Making manual changes to the database may affect fsPulse’s behavior or stability. Read-only access is recommended.


fsPulse manages all internal data access automatically. Most users will not need to interact with the database directly.

Development

fsPulse is under active development, with regular improvements being made to both its functionality and documentation.

Contribution Policy

At this time, fsPulse is not open for public contribution. This may change in the future as the project matures and its architecture stabilizes.

If you’re interested in the project, you’re encouraged to:

  • Explore the source code on GitHub
  • Open GitHub issues to report bugs or request features
  • Follow the project for updates

Your interest and feedback are appreciated.


If contribution opportunities open in the future, setup instructions and contribution guidelines will be added to this page.

License

fsPulse is released under the MIT License.

You are free to use, modify, and distribute the software under the terms of this license.

Full License Text

MIT License

Copyright (c) 2025 gtunes-dev

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

See the full LICENSE file in the repository.

Acknowledgements

fsPulse relies on several open source Rust crates. We gratefully acknowledge the work of these maintainers, particularly for enabling file format validation.

File Format Validation

The following libraries enable fsPulse’s ability to detect corrupted files:

  • claxon — FLAC audio decoding and validation
  • image — Image format decoding for JPG, PNG, GIF, TIFF, BMP
  • lopdf — PDF parsing and validation

See Validators for the complete list of supported file types.

Additional Dependencies

fsPulse wouldn’t be possible without the incredible open source ecosystem it’s built upon:

Web Interface:

Backend:

  • rusqlite — SQLite database interface
  • axum — Web framework
  • tokio — Async runtime
  • clap — Command-line argument parsing
  • figment — Configuration management
  • flexi_logger — Flexible logging
  • pest — Parser generator (for query language)

The complete list of dependencies is available in the project’s Cargo.toml and package.json.


Thank you to all the open source maintainers whose work makes fsPulse possible.