Introduction

FsPulse logo

Read-Only Guarantee. FsPulse never modifies your files. It requires only read access to the directories you configure for scanning. Write access is required only for FsPulse’s own database, configuration files, and logs — never for your data.

Local-Only Guarantee. FsPulse makes no outbound network requests. All functionality runs entirely on your local system, with no external dependencies or telemetry.

What is FsPulse?

FsPulse is a comprehensive filesystem monitoring and integrity tool that gives you complete visibility into your critical directories. Track your data as it grows and changes over time, detect unexpected modifications, and catch silent threats like bit rot and corruption before they become disasters. FsPulse provides continuous awareness through automated scanning, historical trend analysis, and intelligent alerting.

Your filesystem is constantly evolving—files are added, modified, and deleted. Storage grows. But invisible problems hide beneath the surface: bit rot silently corrupts data, ransomware alters files while preserving timestamps, and you don’t realize directories have bloated.

FsPulse gives you continuous awareness of both the visible and invisible:

Monitor Change & Growth:

  • Track directory sizes and growth trends over time
  • Visualize file additions, modifications, and deletions
  • Understand what’s changing and when across all scans

Detect Integrity Issues:

  • Content Hashing (SHA2): Catches when file contents change even though metadata stays the same—the signature of bit rot or tampering
  • Format Validation: Reads and validates file structures to detect corruption in FLAC, JPEG, PNG, PDF, and more

Whether you’re managing storage capacity, tracking project evolution, or ensuring data integrity, FsPulse provides the visibility and peace of mind that comes from truly knowing the state of your data.

FsPulse Web UI - Real-time Scan Monitoring
Web UI showing real-time scan progress with live statistics

Key Capabilities

  • Continuous Monitoring — Schedule recurring scans (daily, weekly, monthly, or custom intervals) to track your filesystem automatically
  • Size & Growth Tracking — Monitor directory sizes and visualize storage trends over time with dual-format units
  • Change Detection — Track all file additions, modifications, and deletions with complete historical records
  • Integrity Verification — SHA2 hashing detects bit rot and tampering; format validators catch corruption in supported file types
  • Historical Analysis — Interactive trend charts show how your data evolves: sizes, counts, changes, and alerts
  • Alert System — Suspicious hash changes and validation failures flagged immediately with status management
  • Powerful Query Language — SQL-inspired syntax for filtering, sorting, and analyzing your filesystem data
  • Web-First Design — Elegant web UI for all operations including scanning, browsing, querying, and configuration

Running FsPulse

FsPulse is a web-first application. Start the server and access all functionality through your browser:

fspulse

Then open http://localhost:8080 in your browser to access the web interface.

The web UI provides complete functionality for managing roots, scheduling and monitoring scans, browsing your filesystem data, running queries, and managing alerts. Configuration is done through environment variables or a config file—see Configuration for details.

FsPulse is designed to scale across large file systems while maintaining clarity and control for the user.


This book provides comprehensive documentation for all aspects of FsPulse. Start with Getting Started for installation, or jump to any section that interests you.

Getting Started

FsPulse can be installed in one of four ways:

  1. Run with Docker (Recommended)
  2. Install via crates.io
  3. Clone and build from source
  4. Download a pre-built release binary from GitHub

Choose the method that works best for your platform and preferences.


1. Run with Docker (Recommended)

The easiest way to run FsPulse is with Docker:

docker pull gtunesdev/fspulse:latest

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Access the web UI at http://localhost:8080

The web UI provides full functionality: managing roots, initiating scans, querying data, and viewing results—all from your browser.

See the Docker Deployment chapter for complete documentation including:

  • Volume management for scanning host directories
  • Configuration options
  • Docker Compose examples
  • NAS deployment (TrueNAS, Unraid)
  • Troubleshooting

2. Install via Crates.io

If you have a Rust toolchain installed, you can install FsPulse directly from crates.io:

cargo install fspulse

This will download, compile, and install the latest version of FsPulse into Cargo’s bin directory, typically ~/.cargo/bin. That directory is usually already in your PATH. If it’s not, you may need to add it manually.
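
For example, on bash you could append it to your shell profile (a sketch; adjust the profile file for your shell):

echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc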

Then run:

fspulse --help

To upgrade to the latest version later:

cargo install fspulse --force

3. Clone and Build from Source

If you prefer working directly with the source code (for example, to contribute or try out development versions):

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
cargo build --release

Then run it from the release build directory:

./target/release/fspulse --help

4. Download Pre-Built Release Binaries

Pre-built release binaries for Linux, macOS, and Windows are available on the GitHub Releases page:

  1. Visit the releases page.
  2. Download the appropriate archive for your operating system.
  3. Unpack the archive.
  4. Optionally move the fspulse binary to a directory included in your PATH.

For example, on Unix systems:

mv fspulse /usr/local/bin/

Then confirm it’s working:

fspulse --help

Running FsPulse

After installation, start the FsPulse server:

fspulse

Or explicitly:

fspulse serve

Then open your browser to http://localhost:8080 to access the web interface.

FsPulse is a web-first application. All functionality is available through the web UI:

  • Root management (create, view, delete roots)
  • Scan scheduling and initiation with real-time progress
  • Interactive data browsing and exploration
  • Powerful query interface
  • Alert management

Configuration

FsPulse is configured through environment variables or a config file, not command-line flags:

# Example: Change port and enable debug logging
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
fspulse

See Configuration for all available settings and the Command-Line Interface page for more details.

Installation

FsPulse can be installed in several ways depending on your preferences and environment.

Docker (Recommended)

Pull the official image and run:

docker pull gtunesdev/fspulse:latest
docker run -d --name fspulse -p 8080:8080 -v fspulse-data:/data gtunesdev/fspulse:latest

Multi-architecture support: linux/amd64, linux/arm64

See Docker Deployment for complete instructions.

Cargo (crates.io)

Install via Rust’s package manager:

cargo install fspulse

Requires Rust toolchain installed on your system.

Pre-built Binaries

Download platform-specific binaries from GitHub Releases.

Available for: Linux, macOS, Windows

macOS builds include both Intel (x86_64) and Apple Silicon (ARM64) binaries.
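
If you're unsure which macOS build you need, check your machine's architecture (arm64 means Apple Silicon, x86_64 means Intel):

uname -m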

Note: All web UI assets are embedded in the binary—no external files or dependencies required.

Next Steps

Building from Source

This guide covers building FsPulse from source code, which is useful for development, customization, or running on platforms without pre-built binaries.

Prerequisites

Before building FsPulse, ensure you have the following installed:

Required Tools

  1. Rust (latest stable version)

    • Install via rustup
    • Verify: cargo --version
  2. Node.js (v18 or later) with npm

    • Install from nodejs.org
    • Verify: node --version and npm --version

Platform-Specific Requirements

Windows:

  • Visual Studio Build Tools or Visual Studio with C++ development tools
  • Required for SQLite compilation

Linux:

  • Build essentials: build-essential (Ubuntu/Debian) or base-devel (Arch)
  • May need pkg-config and libsqlite3-dev depending on distribution

macOS:

  • Xcode Command Line Tools: xcode-select --install

Quick Build

The easiest way to build FsPulse is using the provided build script:

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
./scripts/build.sh

The script will:

  1. Check for required tools
  2. Install frontend dependencies
  3. Build the React frontend
  4. Compile the Rust binary with embedded assets

The resulting binary will be at: ./target/release/fspulse

Manual Build

If you prefer to run each step manually or need more control:

Step 1: Clone the Repository

git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse

Step 2: Build the Frontend

cd frontend
npm install
npm run build
cd ..

This creates the frontend/dist/ directory containing the compiled React application.

Step 3: Build the Rust Binary

cargo build --release

The binary will be at: ./target/release/fspulse

Important: The frontend must be built before compiling the Rust binary. The web UI assets are embedded into the binary at compile time via RustEmbed. If frontend/dist/ doesn’t exist, the build will fail with a helpful error message.

Development Builds

For development, you can skip the release optimization:

# Frontend (development mode with hot reload)
cd frontend
npm run dev

# Backend (in a separate terminal)
cargo run -- serve

In development mode, the backend serves frontend files directly from frontend/dist/ rather than using embedded assets, allowing for faster iteration.

Troubleshooting

“Frontend assets not found”

Error:

❌ ERROR: Frontend assets not found at 'frontend/dist/'

Solution: Build the frontend first:

cd frontend
npm install
npm run build
cd ..

Windows: “link.exe not found”

Error: Missing Visual Studio Build Tools

Solution: Install Visual Studio Build Tools with C++ development tools from visualstudio.microsoft.com

Linux: “cannot find -lsqlite3”

Error: Missing SQLite development libraries

Solution: Install platform-specific package:

  • Ubuntu/Debian: sudo apt-get install libsqlite3-dev
  • Fedora: sudo dnf install sqlite-devel
  • Arch: sudo pacman -S sqlite

npm install fails

Error: Network or permission issues with npm

Solution:

  • Clear npm cache: npm cache clean --force
  • Check Node.js version: node --version (should be v18+)
  • Try with sudo (not recommended) or fix npm permissions

Running Your Build

After building, run FsPulse:

./target/release/fspulse --help
./target/release/fspulse serve

Access the web UI at: http://localhost:8080

Next Steps

Docker Deployment

The easiest way to run FsPulse is with Docker. The container runs FsPulse as a background service with the web UI accessible on port 8080. You can manage roots, initiate scans, query data, and view results—all from your browser.


Quick Start

Get FsPulse running in three simple steps:

# 1. Pull the image
docker pull gtunesdev/fspulse:latest

# 2. Run the container
docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

# 3. Access the web UI
open http://localhost:8080

That’s it! The web UI is now running.

This basic setup stores all FsPulse data (database, config, logs) in a Docker volume and uses default settings. If you need to customize settings (like running as a specific user for NAS deployments, or changing the port), see the Configuration and NAS Deployments sections below.


Scanning Your Files

To scan directories on your host machine, you need to mount them into the container. FsPulse can then scan these mounted paths.

Mounting Directories

Add -v flags to mount host directories into the container. We recommend mounting them under /roots for clarity:

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v fspulse-data:/data \
  -v ~/Documents:/roots/documents:ro \
  -v ~/Photos:/roots/photos:ro \
  gtunesdev/fspulse:latest

The :ro (read-only) flag is recommended for safety—FsPulse only reads files during scans and never modifies them.

Creating Roots in the Web UI

After mounting directories:

  1. Open http://localhost:8080 in your browser
  2. Navigate to Manage Roots in the sidebar
  3. Click Add Root
  4. Enter the container path: /roots/documents (not the host path ~/Documents)
  5. Click Create Root

Important: Always use the container path (e.g., /roots/documents), not the host path. The container doesn’t know about host paths.

Once roots are created, you can scan them from the web UI and monitor progress in real-time.


Docker Compose

For persistent deployments, Docker Compose is cleaner and easier to manage:

version: '3.8'

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    container_name: fspulse
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      # Persistent data storage - REQUIRED
      # Must map /data to either a Docker volume (shown here) or a host path
      # Must support read/write access for database, config, and logs
      - fspulse-data:/data

      # Alternative: use a host path instead
      # - /path/on/host/fspulse-data:/data

      # Directories to scan (read-only recommended for safety)
      - ~/Documents:/roots/documents:ro
      - ~/Photos:/roots/photos:ro
    environment:
      # Optional: override any configuration setting
      # See Configuration section below and https://gtunes-dev.github.io/fspulse/configuration.html
      - TZ=America/New_York

volumes:
  fspulse-data:

Save as docker-compose.yml and run:

docker-compose up -d
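
To confirm the service started, you can follow the container logs (using the service name fspulse from the example above):

docker-compose logs -f fspulse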

Configuration

FsPulse creates a default config.toml on first run with sensible defaults. Most users won’t need to change anything, but when you do, there are three ways to customize settings.

Option 1: Use Environment Variables (Easiest)

Override any setting using environment variables. This works with both docker run and Docker Compose.

Docker Compose example:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    environment:
      - FSPULSE_SERVER_PORT=9090      # Change web UI port
      - FSPULSE_LOGGING_FSPULSE=debug # Enable debug logging
      - FSPULSE_ANALYSIS_THREADS=16   # Use 16 analysis threads
    ports:
      - "9090:9090"

Command line example (equivalent to above):

docker run -d \
  --name fspulse \
  -p 9090:9090 \
  -e FSPULSE_SERVER_PORT=9090 \
  -e FSPULSE_LOGGING_FSPULSE=debug \
  -e FSPULSE_ANALYSIS_THREADS=16 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Environment variables follow the pattern FSPULSE_<SECTION>_<FIELD> and override any settings in config.toml. See the Configuration chapter for a complete list of available variables and their purposes.

Option 2: Edit the Config File

If you prefer editing the config file directly:

  1. Extract the auto-generated config:

    docker exec fspulse cat /data/config.toml > config.toml
    
  2. Edit config.toml with your preferred settings

  3. Copy back and restart:

    docker cp config.toml fspulse:/data/config.toml
    docker restart fspulse
    

Option 3: Pre-Mount Your Own Config (Advanced)

If you want custom settings before first launch, create your own config.toml and mount it:

volumes:
  - fspulse-data:/data
  - ./my-config.toml:/data/config.toml:ro

Most users should start with Option 1 (environment variables) or Option 2 (edit after first run).


NAS Deployments (TrueNAS, Unraid)

NAS systems often have specific user IDs for file ownership. By default, FsPulse runs as user 1000, but you may need it to match your file ownership.

Setting User and Group IDs

Use PUID and PGID environment variables to run FsPulse as a specific user:

TrueNAS Example (apps user = UID 34):

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -e PUID=34 \
  -e PGID=34 \
  -e TZ=America/New_York \
  -v /mnt/pool/fspulse/data:/data \
  -v /mnt/pool/documents:/roots/docs:ro \
  gtunesdev/fspulse:latest

Unraid Example (custom UID 1001):

docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -e PUID=1001 \
  -e PGID=100 \
  -v /mnt/user/appdata/fspulse:/data \
  -v /mnt/user/photos:/roots/photos:ro \
  gtunesdev/fspulse:latest

Why PUID/PGID Matters

Even though you mount directories as read-only (:ro), Linux permissions still apply. If your files are owned by UID 34 and aren’t world-readable, FsPulse (running as UID 1000 by default) won’t be able to scan them. Setting PUID=34 makes FsPulse run as the same user that owns the files.
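
To find the UID and GID that own your files (the values to match with PUID and PGID), you can inspect them directly; the path and username here are only examples:

ls -ln /mnt/pool/documents    # numeric owner (UID) and group (GID) appear in the third and fourth columns
id apps                       # or look up a specific account, such as the TrueNAS apps user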

When to use PUID/PGID:

  • Files have restrictive permissions (not world-readable)
  • Using NAS with specific user accounts (TrueNAS, Unraid, Synology)
  • You need the /data directory to match specific host ownership

Advanced Topics

Custom Network Settings

If you’re using macvlan or host networking, ensure the server binds to all interfaces:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    network_mode: host
    environment:
      - FSPULSE_SERVER_HOST=0.0.0.0  # Required for non-bridge networking
      - FSPULSE_SERVER_PORT=8080

Note: The Docker image sets FSPULSE_SERVER_HOST=0.0.0.0 by default, so this is only needed if your config.toml overrides it to 127.0.0.1.

Reverse Proxy Setup

For public access with authentication, use a reverse proxy like nginx:

server {
    listen 80;
    server_name fspulse.example.com;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support for scan progress
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Using Bind Mounts Instead of Volumes

By default, we use Docker volumes (-v fspulse-data:/data) which Docker manages automatically. For NAS deployments, you might prefer bind mounts to integrate with your existing backup schemes:

# Create directory on host
mkdir -p /mnt/pool/fspulse/data

# Use bind mount
docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v /mnt/pool/fspulse/data:/data \
  gtunesdev/fspulse:latest

Benefits of bind mounts for NAS:

  • Included in your NAS snapshot schedule
  • Backed up with your existing backup system
  • Directly accessible for manual inspection

Trade-off: You need to manage permissions yourself (use PUID/PGID if needed).


Troubleshooting

Cannot Access Web UI

Problem: http://localhost:8080 doesn’t respond

Solutions:

  1. Check the container is running:

    docker ps | grep fspulse
    
  2. Check logs for errors:

    docker logs fspulse
    

    Look for “Server started” message.

  3. Verify port mapping:

    docker port fspulse
    

    Should show 8080/tcp -> 0.0.0.0:8080

Permission Denied Errors

Problem: “Permission denied” when scanning or accessing /data

Solutions:

  1. Check file ownership:

    ls -ln /path/to/your/files
    
  2. Set PUID/PGID to match file owner:

    docker run -e PUID=1000 -e PGID=1000 ...
    
  3. For bind mounts, ensure host directory is writable:

    chown -R 1000:1000 /mnt/pool/fspulse/data
    

Configuration Changes Don’t Persist

Problem: Settings revert after container restart

Solution: Verify /data volume is mounted:

docker inspect fspulse | grep -A 10 Mounts

If missing, recreate container with volume:

docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ...

Database Locked Errors

Problem: “Database is locked” errors

Cause: Multiple containers accessing the same database

Solution: Only run one FsPulse container per database. Don’t mount the same /data volume to multiple containers.


Data Backup

Backing Up Your Data

For Docker volumes:

# Stop container
docker stop fspulse

# Backup volume
docker run --rm \
  -v fspulse-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/fspulse-backup.tar.gz /data

# Restart container
docker start fspulse

For bind mounts:

# Simply backup the host directory
tar czf fspulse-backup.tar.gz /mnt/pool/fspulse/data

Restoring from Backup

For Docker volumes:

# Create volume
docker volume create fspulse-data-restored

# Restore data
docker run --rm \
  -v fspulse-data-restored:/data \
  -v $(pwd):/backup \
  alpine sh -c "cd / && tar xzf /backup/fspulse-backup.tar.gz"

# Use restored volume
docker run -d --name fspulse -v fspulse-data-restored:/data ...

For bind mounts:

tar xzf fspulse-backup.tar.gz -C /mnt/pool/fspulse/data

Image Tags and Updates

FsPulse provides multiple tags for different update strategies:

Tag | Description | When to Use
latest | Latest stable release | Staying current (not recommended for production)
1.2.3 | Specific version | Production (exact control)
1.2 | Latest patch of minor version | Production (auto-patch updates)
main | Development builds | Testing new features

Recommendation: Use specific version tags (1.2.3) or minor version tags (1.2) for production. Avoid latest in production to prevent unexpected updates.

Updating to a new version:

docker pull gtunesdev/fspulse:1.2.3
docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ... gtunesdev/fspulse:1.2.3

Your data persists in the volume across updates.


Platform Support

FsPulse images support multiple architectures—Docker automatically pulls the correct one for your platform:

  • linux/amd64 - Intel/AMD processors (most common)
  • linux/arm64 - ARM processors (Apple Silicon, Raspberry Pi 4, ARM servers)

Next Steps


Getting Help

First Steps

This guide walks you through your first scan and basic usage of FsPulse.

Starting the Web Interface

Launch FsPulse:

fspulse

Open your browser to http://127.0.0.1:8080

Adding Your First Scan Root

  1. Navigate to the Monitor page
  2. Click Add Root
  3. Enter the path to the directory you want to monitor
  4. Configure initial scan settings

Running Your First Scan

  1. From the Monitor page, click Scan Now for your newly added root
  2. Watch the real-time progress on the Scans page
  3. Once complete, explore the results

Exploring Your Data

After your first scan completes:

  • Browse — Navigate your filesystem hierarchy
  • Insights — View charts and trends (requires multiple scans)
  • Alerts — Check for any validation issues detected
  • Explore — Run queries against your scan data

Setting Up Scheduled Scans

  1. Navigate to the Monitor page
  2. Click Add Schedule
  3. Select your root and configure:
    • Schedule type (daily, weekly, monthly, interval)
    • Time/day settings
    • Scan options (hashing, validation)
  4. Save the schedule

Scheduled scans will run automatically based on your configuration.

Next Steps

Scanning

FsPulse scans are at the core of how it tracks changes to the file system over time. A scan creates a snapshot of a root directory and analyzes changes compared to previous scans. This page explains how to initiate scans, how incomplete scans are handled, and the phases involved in each scan.


Initiating a Scan

FsPulse runs as a web service, and scans are initiated through the web UI:

  1. Start the server: fspulse serve
  2. Open http://localhost:8080 in your browser (or the custom port you’ve configured)
  3. Navigate to the Monitor page
  4. Configure scan options (hashing, validation)
  5. Start the scan and monitor real-time progress on the Scans page

The web UI supports both scheduled automatic scans and manual on-demand scans. You can create recurring schedules (daily, weekly, monthly, or custom intervals) or initiate individual scans as needed. See Monitor for details.

Once a scan on a root has begun, it must complete or be explicitly stopped before another scan on the same root can be started. Scans on different roots can run independently.

Note: FsPulse is designed with the web UI as the primary interface for all users. A command-line entry point is available to launch the server (see Command-Line Interface), but all scanning, querying, browsing, and configuration functionality is accessed through the web UI.


Hashing

Hashing is a key capability of FsPulse.

FsPulse uses the standard SHA-256 algorithm to compute digital fingerprints of file contents. Hashing enables detection of changes to file content even in cases where the modification date and file size have not changed. One example of a case where this can occur is bit rot (data decay).

When configuring a scan in the web UI, you can enable hashing with these options:

  • Hash changed items (default): Compute hashes for items that have never been hashed or whose file size or modification date has changed
  • Hash all items: Hash all files, including those that have been previously hashed

If a hash is detected to have changed, a change record is created and an alert is generated (see Alerts).

Finding Hash Changes

You can investigate hash changes through the web UI:

  • Alerts Page: Shows suspicious hash changes where file metadata hasn’t changed
  • Explore Page: Use the Query tab to run custom queries

Example query to find hash changes without metadata changes (run on the Explore page’s Query tab):

changes where meta_change:(F), hash_change:(T) show default, item_path order by change_id desc

Validating

FsPulse can attempt to assess the “validity” of files.

FsPulse uses community-contributed libraries to “validate” files. Validation consists of opening and reading or traversing a file; these libraries raise a variety of “errors” when invalid content is encountered.

FsPulse’s ability to validate files is limited to the capabilities of the libraries that it uses, and these libraries vary in terms of completeness and accuracy. In some cases, such as FsPulse’s use of lopdf to validate PDF files, false positive “errors” may be detected as a consequence of lopdf encountering PDF file contents it does not yet understand. Despite these limitations, FsPulse offers a unique and effective view into potential validity issues in files.

See Validators for the complete list of supported file types.

When configuring a scan in the web UI, you can enable validation with these options:

  • Validate changed items (default): Validate files that have never been validated or have changed in terms of modification date or size
  • Validate all items: Validate all files regardless of previous validation status

Validation States

Validation states are stored in the database as:

  • U: Unknown. No validation has been performed
  • N: No Validator. No validator exists for this file type
  • V: Valid. Validation was performed and no errors were encountered
  • I: Invalid. Validation was performed and an error was encountered

In the case of ‘I’ (Invalid), the validation error message is stored alongside the validation state. When an item’s validation state changes, the change is recorded and both old and new states are available for analysis.

If a validation pass produces an error identical to a previously seen error, no new change is recorded.

Finding Validation Issues

Invalid items are automatically flagged as alerts. You can investigate validation failures through the web UI:

  • Alerts Page: Shows all items with validation failures, with filtering and status management
  • Browse Page: Click any item to see its validation status and error details
  • Explore Page: Use the Query tab to run custom queries

Example query to find validation state changes (run on the Explore page’s Query tab):

changes where val_change:(T) show default, item_path order by change_id desc

Additional queries can filter on specific old and new validation states. See Query Syntax for details.
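
For example, to find items that were previously valid but are now invalid, you could filter on the old and new validation states (an illustrative query using the val_old and val_new columns; run it on the Explore page’s Query tab):

changes where val_old:(V), val_new:(I) show default, item_path order by change_id desc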


In-Progress Scans

FsPulse is designed to be resilient to interruptions like system crashes or power loss. If a scan stops before completing, FsPulse saves its state so it can be resumed later.

When you attempt to start a new scan on a root that has an in-progress scan, the web UI will prompt you to:

  • Resume the scan from where it left off
  • Stop the scan and discard its partial results

Stopping a scan reverts the database to its pre-scan state. All detected changes, computed hashes, and validations from that partial scan will be discarded.


Phases of a Scan

Each scan proceeds in three main phases:

1. Discovery

The directory tree is deeply traversed. For each file or folder encountered:

  • If not seen before:
    • A new item is created
    • An Add change is recorded
  • If seen before:
    • FsPulse compares current file system metadata:
      • Modification date (files and folders)
      • File size (files only)
    • If metadata differs, the item is updated and a Modify change is recorded
  • If the path matches a tombstoned item (previously deleted):
    • If type matches (file/folder), the tombstone is reactivated and an Add change is created
    • If type differs, FsPulse creates a new item and new Add change

Files and folders are treated as distinct types. A single path that appears as both a file and folder at different times results in two separate items.


2. Sweep

FsPulse identifies items not seen during the current scan:

  • Any item that:
    • Is not a tombstone, and
    • Was not visited in the scan

…is marked as a tombstone, and a Delete change is created.

Moved files appear as deletes and adds, as FsPulse does not yet track move operations.


3. Analysis

This phase runs only if hashing and/or validation is enabled when configuring the scan (see Hashing and Validating above).

  • Hashing — Computes a SHA2 hash of file contents
  • Validation — Uses file-type-specific validators to check content integrity (see Validators)

If either the hash or validation result changes:

  • If an Add or Modify change already exists, the new data is attached to it
  • Otherwise, a new Modify change is created

Each change record stores both the old and new values for comparison, allowing you to track exactly what changed.


Performance and Threading

The analysis phase runs in parallel: hashing and validation work is distributed across a pool of worker threads (8 by default; the count is configurable via the analysis settings in Configuration).
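
For example, you can raise the worker thread count with an environment variable before starting the server (16 is an arbitrary illustration):

export FSPULSE_ANALYSIS_THREADS=16
fspulse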


Summary of Phases

Phase | Purpose
Discovery | Finds and records new or modified items
Sweep | Marks missing items as tombstones and records deletions
Analysis | Computes hashes/validations and records changes if values differ

Each scan provides a consistent view of the file system at a moment in time and captures important differences across revisions.

Interface

The FsPulse interface provides a comprehensive visual environment for monitoring your filesystems, managing scans, analyzing trends, and investigating issues.

Overview

Access the interface by running:

fspulse

By default, the interface is available at http://127.0.0.1:8080. You can customize the host and port through configuration or environment variables (see Configuration).

Key Features

The web interface includes the following pages:

  • Scans — Dashboard showing scan status and history
  • Monitor — Configure automatic scans and manage scan roots
  • Browse — Navigate filesystem hierarchy with detailed item inspection
  • Alerts — Manage integrity issues and validation failures
  • Insights — Visualize historical data with interactive charts
  • Explore — Query interface for advanced data analysis

Live Updates

The web interface uses WebSocket connections to provide real-time updates during scan operations. When a scan is running (whether manually initiated or scheduled), you can watch progress updates, statistics, and phase transitions as they happen.

The left sidebar provides access to all major sections. On smaller screens, the sidebar collapses into a hamburger menu for better mobile experience.

Scans

The Scans page serves as your central dashboard, providing an at-a-glance view of scan status and history.

Features

Active Scan Monitoring

When a scan is running (manually initiated or scheduled), the Scans page displays:

  • Real-time progress indicators
  • Current scan phase (Discovery, Hash, Validation, Analysis)
  • Statistics: files/folders processed, sizes calculated, changes detected
  • Live updates via WebSocket connection

Recent Scan Results

For completed scans, the Scans page shows:

  • Scan completion time
  • Total items scanned
  • Change summary (additions, modifications, deletions)
  • Alert count (validation failures, suspicious hash changes)
  • Storage metrics

Quick Access

From the Scans page, you can quickly navigate to:

  • Browse the scanned filesystem
  • View detailed scan reports
  • Investigate alerts
  • Configure new scans

Use Cases

  • Monitoring: Keep the Scans page open to watch long-running scans
  • Status Check: Quick view of your most recent scan activity
  • Alert Awareness: Immediate visibility into any detected issues

Monitor

The Monitor page is your control center for managing scan roots, scheduling automatic scans, and viewing the scan queue.

Monitor Page

Managing Scan Roots

Adding a Root

  1. Click Add Root
  2. Enter the full filesystem path to monitor
  3. Optionally provide a friendly name
  4. Save

Managing Roots

  • Scan Now: Trigger an immediate one-time scan
  • Delete: Remove the root (also removes associated schedules and queue entries)
  • View root statistics and last scan time

Scheduling Automatic Scans

FsPulse supports flexible scheduling options for automated monitoring:

Schedule Types

  • Daily: Run at a specific time each day
  • Weekly: Run on specific days of the week at a chosen time
  • Monthly: Run on a specific day of the month
  • Interval: Run every N hours/minutes

Creating a Schedule

  1. Click Add Schedule
  2. Select the root to scan
  3. Choose schedule type and timing
  4. Configure scan options:
    • Enable hashing (default: all files)
    • Enable validation (default: new/changed files)
  5. Save

Schedule Management

  • Enable/Disable: Temporarily pause schedules without deleting them
  • Edit: Modify timing or scan options
  • Delete: Remove the schedule

Scan Queue

The queue shows:

  • Pending scheduled scans waiting to execute
  • Currently running scans
  • Recent scan history

Scans are queued and executed sequentially to prevent resource conflicts.

Configuration

Scheduling and queue behavior can be customized via Configuration.

Browse

The Browse page provides an intuitive interface for navigating your filesystem hierarchy and inspecting individual items in detail.

Filesystem Tree

Navigate your scanned directories with a hierarchical tree view showing:

  • Folders and files from your most recent scan
  • Item counts and sizes
  • Visual indicators for items with alerts

Browse Filesystem Tree

Features

  • Search: Filter items by path using the search box
  • Expand/Collapse: Navigate the folder structure
  • Sort: Order by name, size, or modification time
  • Root Selection: Switch between different scan roots

Item Detail Panel

Click any item to open the detail panel, which provides comprehensive information:

Item Detail Panel

Metadata

  • File/folder size (dual format: decimal and binary units)
  • Modification time
  • Item type
  • Current hash (if hashed)

Validation Status

  • Validation state (Valid, Invalid, NotValidated)
  • Validation error details (if any)
  • Last validation scan information

Change History

  • Change type across scans (Add, Modify, Delete, NoChange)
  • Hash changes detected
  • Modification timeline

Associated Alerts

  • Suspicious hash changes
  • Validation failures
  • Alert status and timestamps

Use Cases

  • Investigation: Drill down into specific files or folders when alerts are triggered
  • Verification: Check hash and validation status of critical files
  • Analysis: Understand what changed between scans
  • Navigation: Visual exploration of your monitored filesystems

Alerts

The Alerts page provides a centralized view for managing integrity issues detected during scans.

Alert Types

FsPulse generates three types of alerts:

Access Denied

Triggered when FsPulse is unable to access an item or folder. These alerts can occur during either the scan phase or the analysis phase:

During Scan Phase:

  • Unable to retrieve item metadata (type, size, or modification date)
  • Unable to enumerate folder contents (typically due to permission restrictions)

During Analysis Phase:

  • Unable to read a file for hashing or validation

Notes:

  • If FsPulse cannot determine an item’s type from metadata, the item is recorded as an instance of the “Unknown” type
  • Items with failed metadata retrieval, whether “Unknown” or otherwise, are not examined during the analysis phase

Suspicious Hash Changes

Triggered when:

  • A file’s hash changes between scans
  • The file’s modification time has NOT changed

This pattern indicates potential:

  • Bit rot (silent data corruption)
  • Tampering or malicious modification
  • Filesystem anomalies

Invalid Items

Triggered when format validation fails:

  • FLAC audio files with invalid structure
  • JPEG/PNG images that fail format checks
  • PDF files with corruption
  • Other validated file types with detected issues

See Validators for details on supported file types.

Alert Status

Each alert can be in one of three states:

  • Open: New alert requiring attention
  • Flagged: Marked for follow-up or further investigation
  • Dismissed: Reviewed and determined to be non-critical

Managing Alerts

Filtering

Filter alerts by:

  • Status (Open/Flagged/Dismissed)
  • Alert type (Access denied/Hash change/Validation failure)
  • Root
  • Time range
  • Path search

Status Actions

  • Flag: Mark alert for follow-up
  • Dismiss: Acknowledge and close the alert
  • Reopen: Change dismissed alert back to open

Batch Operations

Select multiple alerts to update status in bulk.

Alert Details

Click an alert to view:

  • Item path and metadata
  • Alert timestamp
  • Access error details (for access denied alerts)
  • Change details (for hash changes)
  • Validation error message (for invalid items)
  • Link to item in Browse view

Integration with Browse

Alerts are also displayed in the Browse page’s item detail panel, providing context when investigating specific files.

Workflow Recommendations

  1. Review Open Alerts: Check new alerts regularly
  2. Investigate: Use Browse to examine affected items
  3. Triage: Flag important issues, dismiss false positives
  4. Restore: Use backups to restore corrupted files if needed
  5. Track: Monitor alert trends in Insights

Insights

The Insights page provides interactive visualizations showing how your data evolves over time across multiple scans.

Insights Trend Charts

Available Charts

Size Trends

Track total storage usage over time:

  • See growth or reduction in directory sizes
  • Identify storage bloat
  • Displayed in both decimal (GB) and binary (GiB) units

Item Counts

Monitor the number of items:

  • Total files and folders over time
  • Detect unexpected additions or deletions
  • Separate trend lines for files vs. directories

Change Activity

Visualize filesystem activity:

  • Additions, modifications, and deletions per scan
  • Identify periods of high change
  • Understand modification patterns

Alert Trends

Track integrity issues over time:

  • Validation failures
  • Suspicious hash changes
  • Alert resolution patterns

Features

Root Selection

Select which scan root to analyze from the dropdown. Each root maintains independent trend data.

Date Range Filtering

Customize the time window:

  • Last 7 days
  • Last 30 days
  • Last 90 days
  • All time
  • Custom range

Baseline Exclusion

Toggle whether to include the initial (baseline) scan in trend calculations. Baseline scans often show large numbers of “additions” which can skew trend visualizations.

Interactive Charts

  • Hover for detailed values
  • Pan and zoom on time ranges
  • Toggle data series on/off

Requirements

Trend analysis requires multiple scans of the same root. After your first scan, you’ll see a message prompting you to run additional scans to generate trend data.

Use Cases

  • Capacity Planning: Monitor storage growth rates
  • Change Detection: Identify unusual modification patterns
  • Validation Monitoring: Track data integrity over time
  • Baseline Comparison: See how your filesystem evolves from initial state

Explore

The Explore page provides both visual query building and free-form query capabilities for analyzing your FsPulse data.

Overview

Explore offers two ways to query your data:

  • Structured tabs (Roots, Scans, Items, Changes, Alerts) — Visual query builder with column selection, sorting, and filtering
  • Query tab — Free-form query entry using FsPulse’s query language

Structured Query Tabs

The Roots, Scans, Items, Changes, and Alerts tabs provide a visual interface for building queries without writing query syntax.

Layout

Each structured tab displays:

  • Column selector panel (left) — Configure which columns to display and how
  • Results table (right) — View query results with pagination

Column Controls

The column selector provides several controls for each available column:

Control | Description
Checkbox | Show or hide the column in results
Drag handle | Reorder columns by dragging
Sort | Click to cycle through ascending (↑), descending (↓), or no sort (⇅)
Filter | Add a filter condition for this column

Working with Columns

Show/Hide Columns: Check or uncheck the box next to any column name to include or exclude it from results.

Reorder Columns: Drag columns using the grip handle to change the display order in the results table.

Sort Results: Click the sort control to cycle through:

  • ⇅ No sort
  • ↑ Ascending
  • ↓ Descending

Only one column can be sorted at a time.

Filter Data: Click the filter button (+) to add a filter condition. Active filters display as badges showing the filter value. Click the X to remove a filter.

Reset: Click the reset button in the column header to restore all columns to their default visibility, order, and clear all filters and sorts.

Query Tab

The Query tab provides a free-form interface for writing queries using FsPulse’s SQL-inspired query language.

Features

  • Query input — Text area for entering queries
  • Execute — Run the query (or press Cmd/Ctrl + Enter)
  • Example queries — Expandable section with clickable sample queries
  • Documentation link — Quick access to the full query syntax reference
  • Results table — Paginated results display

Example Queries

The Query tab includes sample queries you can click to populate the input:

items limit 10
items where item_type:(F) show item_path, size limit 25
items where item_type:(F), size:(>1000000) show item_path, size order by size desc limit 20
alerts where alert_status:(O) show alert_type, item_path, created_at limit 15

Query Domains

Both interfaces support querying five data domains:

Domain | Description
roots | Configured scan roots
scans | Scan metadata and statistics
items | Files and folders from the most recent scan
changes | Change records across all scans
alerts | Integrity issues and validation failures

When to Use Each Interface

Use structured tabs when:

  • Exploring data without knowing the exact query syntax
  • Quickly toggling columns to find relevant information
  • Building simple filters and sorts visually

Use the Query tab when:

  • Writing complex queries with multiple conditions
  • Using advanced query features (comparisons, multiple filters with AND/OR)
  • Reproducing a specific query you’ve used before
  • Learning the query syntax with immediate feedback

Query Syntax

For complete documentation on the query language including all operators, field names, and advanced features, see Query Syntax.

Command-Line Interface

FsPulse is a web-first application. The CLI exists solely to launch the web server—all functionality including scanning, querying, browsing, and configuration is accessed through the Interface.


Starting FsPulse

To start the FsPulse server:

fspulse

Or explicitly:

fspulse serve

Both commands are equivalent. The server starts on http://127.0.0.1:8080 by default.

Once running, open your browser to access the full web interface for:

  • Managing scan roots and schedules
  • Running and monitoring scans
  • Browsing your filesystem data
  • Querying and exploring results
  • Managing alerts

Configuration

FsPulse behavior is configured through environment variables or a config file, not command-line flags.

Environment Variables

Set these before running fspulse:

# Server settings
export FSPULSE_SERVER_HOST=0.0.0.0    # Bind address (default: 127.0.0.1)
export FSPULSE_SERVER_PORT=9090       # Port number (default: 8080)

# Analysis settings
export FSPULSE_ANALYSIS_THREADS=16    # Worker threads (default: 8)

# Logging
export FSPULSE_LOGGING_FSPULSE=debug  # Log level (default: info)

# Data location
export FSPULSE_DATA_DIR=/custom/path  # Data directory override

fspulse

Configuration File

FsPulse also reads from config.toml in the data directory. See Configuration for complete documentation including:

  • All available settings
  • Environment variable reference
  • Platform-specific data directory locations
  • Docker configuration

Getting Help

View version and basic usage:

fspulse --help
fspulse --version

Query Syntax

FsPulse provides a flexible, SQL-like query language for exploring scan results. This language supports filtering, custom column selection, ordering, and limiting the number of results.


Query Structure

Each query begins with one of the supported domains:

  • roots
  • scans
  • items
  • changes
  • alerts

Column listings for the roots, scans, items, and changes domains follow below; for alerts query examples, see the Explore page.

You can then add any of the following optional clauses:

DOMAIN [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...]

Column Availability

roots Domain

All queries that retrieve root information begin with the keyword roots:

roots [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...]
Property | Type
root_id | Integer
root_path | Path

scans Domain

All queries that retrieve scan information begin with the keyword scans:

scans [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...]
Property | Type | Description
scan_id | Integer | Unique scan identifier
root_id | Integer | Root directory identifier
schedule_id | Integer | Schedule identifier (null for manual scans)
started_at | Date | Timestamp when scan started
ended_at | Date | Timestamp when scan ended (null if incomplete)
was_restarted | Boolean | True if scan was resumed after restart
scan_state | Scan State Enum | State of the scan
is_hash | Boolean | Hash new or changed files
hash_all | Boolean | Hash all items including unchanged
is_val | Boolean | Validate new or changed files
val_all | Boolean | Validate all items including unchanged
file_count | Integer | Count of files found in the scan
folder_count | Integer | Count of directories found in the scan
total_size | Integer | Total size in bytes of all files in the scan
alert_count | Integer | Number of alerts created during the scan
add_count | Integer | Number of items added in the scan
modify_count | Integer | Number of items modified in the scan
delete_count | Integer | Number of items deleted in the scan
error | String | Error message if scan failed

items Domain

All queries that retrieve item information begin with the keyword items:

items [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...]
Property | Type
item_id | Integer
root_id | Integer
item_path | Path
item_type | Item Type Enum
access | Access Status
last_scan | Integer
is_ts | Boolean
mod_date | Date
size | Integer
last_hash_scan | Integer
file_hash | String
last_val_scan | Integer
val | Validation Status
val_error | String

changes Domain

All queries that retrieve change history begin with the keyword changes:

changes [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...]
Property | Type
change_id | Integer
scan_id | Integer
item_id | Integer
change_type | Change Type Enum
access_old | Access Status
access_new | Access Status
is_undelete | Boolean
meta_change | Boolean
mod_date_old | Date
mod_date_new | Date
size_old | Integer
size_new | Integer
hash_change | Boolean
last_hash_scan_old | Integer
hash_old | String
hash_new | String
val_change | Boolean
last_val_scan_old | Integer
val_old | Validation Status
val_new | Validation Status
val_error_old | String
val_error_new | String
root_id | Integer
item_path | Path

The WHERE Clause

The WHERE clause filters results using one or more filters. Each filter has the structure:

column_name:(value1, value2, ...)

Values must match the column’s type. You can use individual values, ranges (when supported), or a comma-separated combination. Values are not quoted unless explicitly shown.

Type | Examples | Notes
Integer | 5, 1..5, 3, 5, 7..9, null, not null, NULL, NOT NULL | Supports ranges and nullability. Ranges are inclusive.
Date | 2024-01-01, 2024-01-01..2024-06-30, null, not null, NULL, NOT NULL | Use YYYY-MM-DD. Ranges are inclusive.
Boolean | true, false, T, F, null, not null, NULL, NOT NULL | Values are unquoted. Null values are allowed in all-lower or all-upper case.
String | 'example', 'error: missing EOF', null, NULL | Quoted strings. Null values are allowed in all-lower or all-upper case.
Path | 'photos/reports', 'file.txt' | Must be quoted. Null values are not supported.
Validation Status | V, I, N, U, null, not null, NULL, NOT NULL | Valid (V), Invalid (I), No Validator (N), Unknown (U). Unquoted. Ranges not supported.
Item Type Enum | F, D, S, O, null, not null, NULL, NOT NULL | File (F), Directory (D), Symlink (S), Other (O). Unquoted. Ranges not supported.
Change Type Enum | N, A, M, D, null, not null, NULL, NOT NULL | No Change (N), Add (A), Modify (M), Delete (D). Unquoted. Ranges not supported.
Scan State Enum | S, W, A, C, P, E, null, not null, NULL, NOT NULL | Scanning (S), Sweeping (W), Analyzing (A), Completed (C), Stopped (P), Error (E). Unquoted. Ranges not supported.
Access Status | N, M, R, null, not null, NULL, NOT NULL | No Error (N), Meta Error (M), Read Error (R). Unquoted. Ranges not supported.
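
For example, a date-range filter on items (an illustrative query; the dates are arbitrary):

items where mod_date:(2024-01-01..2024-06-30) show item_path, mod_date order by mod_date desc limit 20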

Combining Filters

When specifying multiple values within a single filter, the match is logically OR. When specifying multiple filters across different columns, the match is logically AND.

For example:

scans where started_at:(2025-01-01..2025-01-07, 2025-02-01..2025-02-07), is_hash:(T)

This query matches scans that:

  • Occurred in either the first week of January 2025 or the first week of February 2025
  • AND were performed with hashing enabled

The SHOW Clause

The SHOW clause controls which columns are displayed and how some of them are formatted. If omitted, a default column set is used.

You may specify:

  • A list of column names
  • The keyword default to insert the default set
  • The keyword all to show all available columns

Formatting modifiers can be applied using the @ symbol:

item_path@name, mod_date@short

Format Specifiers by Type

Type | Allowed Format Modifiers
Date | full, short, timestamp
Path | full, relative, short, name
Validation / Enum / Boolean | full, short
Integer / String | (no formatting options)

The timestamp format modifier converts dates to UTC timestamps (seconds since Unix epoch), which is useful for programmatic processing or web applications that need to format dates in the user’s local timezone.


The ORDER BY Clause

Specifies sort order for the results:

items order by mod_date desc, item_path asc

If direction is omitted, ASC is assumed.


The LIMIT Clause

Restricts the number of rows returned:

items limit 50

Examples

# Items whose path contains 'reports'
items where item_path:('reports')

# Changes involving validation failures
changes where val_new:(I) show default, val_old, val_new order by change_id desc

# Scans with timestamp for programmatic processing
scans show scan_id, started_at@timestamp, file_count order by started_at desc limit 10

# Scans with changes and alerts
scans show scan_id, file_count, total_size, add_count, modify_count, delete_count, alert_count order by started_at desc

See also: Explore Page · Validators · Configuration

Configuration

FsPulse supports persistent, user-defined configuration through a file named config.toml. This file allows you to control logging behavior, analysis settings, server configuration, and more.

📦 Docker Users: If you’re running FsPulse in Docker, see the Docker Deployment chapter for Docker-specific configuration including environment variable overrides and volume management.


Finding config.toml

The config.toml file is stored in FsPulse’s data directory. The location depends on how you’re running FsPulse:

Docker Deployments

When running in Docker, the data directory is /data, so the config file is located at /data/config.toml inside the container. FsPulse automatically creates this file with default settings on first run.

To access it from your host machine:

# View the config
docker exec fspulse cat /data/config.toml

# Extract to edit
docker exec fspulse cat /data/config.toml > config.toml

See the Docker Deployment chapter for details on editing the config in Docker.

Native Installations

FsPulse uses the directories crate to determine the platform-specific data directory location:

Platform | Data Directory Location | Example Path
Linux | $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse | /home/alice/.local/share/fspulse
macOS | $HOME/Library/Application Support/fspulse | /Users/alice/Library/Application Support/fspulse
Windows | %LOCALAPPDATA%\fspulse\data | C:\Users\Alice\AppData\Local\fspulse\data

The config file is located at <data_dir>/config.toml.

On first run, FsPulse automatically creates the data directory and writes a default config.toml if one doesn’t exist.

Tip: You can delete config.toml at any time to regenerate it with defaults. Newly introduced settings will not automatically be added to an existing file.
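
For example, on Linux with the default data directory:

rm ~/.local/share/fspulse/config.toml   # a fresh default config.toml is written on the next run
fspulse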

Override: The data directory location can be overridden using the FSPULSE_DATA_DIR environment variable. See Data Directory and Database Settings for details.


Configuration Settings

Here are the current available settings and their default values:

[logging]
fspulse = "info"
lopdf = "error"

[server]
port = 8080
host = "127.0.0.1"

[analysis]
threads = 8

Logging

FsPulse uses the Rust log crate, and so does the PDF validation crate lopdf. You can configure logging levels independently for each subsystem in the [logging] section.

Supported log levels:

  • error – only critical errors
  • warn – warnings and errors
  • info – general status messages (default for FsPulse)
  • debug – verbose output for debugging
  • trace – extremely detailed logs

Log File Behavior

  • Logs are written to <data_dir>/logs/
  • Each run of FsPulse creates a new log file, named using the current date and time
  • FsPulse retains up to 100 log files; older files are automatically deleted
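
For example, to see the newest log file on Linux (assuming the default data directory):

ls -t ~/.local/share/fspulse/logs | head -n 1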

Server Settings

The [server] section controls the web UI server behavior when running fspulse serve.

  • host: IP address to bind to (default: 127.0.0.1)
    • 127.0.0.1 - Localhost only (secure, only accessible from same machine)
    • 0.0.0.0 - All interfaces (required for Docker, remote access)
  • port: Port number to listen on (default: 8080)

Note: In Docker deployments, the host should be 0.0.0.0 to allow access from outside the container. The Docker image sets this automatically via environment variable.


Analysis Settings

The [analysis] section controls how many threads are used during the analysis phase of scanning (for hashing and validation).

  • threads: number of worker threads (default: 8)

You can adjust this based on your system’s CPU count or performance needs. FsPulse uses SHA-256 for file hashing to detect content changes and verify integrity.
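
To see how many CPU cores your machine has, you can check (commands for Linux and macOS respectively):

nproc                # Linux
sysctl -n hw.ncpu    # macOS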


Environment Variables

All configuration settings can be overridden using environment variables. This is particularly useful for:

  • Docker deployments where editing files is inconvenient
  • Different environments (development, staging, production) with different settings
  • NAS deployments (TrueNAS, Unraid) using web-based configuration UIs
  • CI/CD pipelines where configuration is managed externally

How It Works

Environment variables follow the pattern: FSPULSE_<SECTION>_<FIELD>

The <SECTION> corresponds to a section in config.toml (like [server], [logging], [analysis]), and <FIELD> is the setting name within that section.

Precedence (highest to lowest):

  1. Environment variables - Override everything
  2. config.toml - User-defined settings
  3. Built-in defaults - Fallback values

This allows you to set sensible defaults in config.toml and override them as needed per deployment.

Complete Variable Reference

Server Settings

Control the web UI server behavior (when running fspulse serve):

Variable | Default | Valid Values | Description
FSPULSE_SERVER_HOST | 127.0.0.1 | IP address | Bind address. Use 0.0.0.0 for Docker/remote access, 127.0.0.1 for localhost only
FSPULSE_SERVER_PORT | 8080 | 1-65535 | Web UI port number

Examples:

# Native - serve only on localhost
export FSPULSE_SERVER_HOST=127.0.0.1
export FSPULSE_SERVER_PORT=8080
fspulse serve

# Docker - must bind to all interfaces
docker run -e FSPULSE_SERVER_HOST=0.0.0.0 -e FSPULSE_SERVER_PORT=9090 -p 9090:9090 ...

Logging Settings

Configure log output verbosity:

Variable | Default | Valid Values | Description
FSPULSE_LOGGING_FSPULSE | info | error, warn, info, debug, trace | FsPulse application log level
FSPULSE_LOGGING_LOPDF | error | error, warn, info, debug, trace | PDF library (lopdf) log level

Examples:

# Enable debug logging
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_LOGGING_LOPDF=error

# Docker
docker run -e FSPULSE_LOGGING_FSPULSE=debug ...

Analysis Settings

Configure scan behavior and performance:

Variable                 | Default | Valid Values | Description
FSPULSE_ANALYSIS_THREADS | 8       | 1-256        | Number of worker threads for the analysis phase (hashing/validation)

Examples:

# Use 16 threads for faster scanning
export FSPULSE_ANALYSIS_THREADS=16

# Docker
docker run -e FSPULSE_ANALYSIS_THREADS=16 ...

Data Directory and Database Settings

Control where FsPulse stores its data:

Variable             | Default           | Valid Values   | Description
FSPULSE_DATA_DIR     | Platform-specific | Directory path | Override the data directory location. Contains config, logs, and database (by default). Cannot be set in config.toml.
FSPULSE_DATABASE_DIR | <data_dir>        | Directory path | Override the database directory only (advanced). Stores the database outside the data directory. This is a directory path, not a file path; the database file is always named fspulse.db.

Data Directory:

The data directory contains configuration (config.toml), logs (logs/), and the database (fspulse.db) by default. It is determined by:

  1. FSPULSE_DATA_DIR environment variable (if set)
  2. Platform-specific project local directory (default):
    • Linux: $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse
    • macOS: $HOME/Library/Application Support/fspulse
    • Windows: %LOCALAPPDATA%\fspulse\data
    • Docker: /data

Database Location:

By default, the database is stored in the data directory as fspulse.db. You can override this to store the database separately:

Database Directory Precedence:

  1. FSPULSE_DATABASE_DIR environment variable (if set) - highest priority
  2. config.toml [database] dir setting (if configured)
  3. Data directory (from FSPULSE_DATA_DIR or platform default)

Important Notes:

  • The database file is always named fspulse.db within the determined directory
  • Configuration and logs always remain in the data directory, even if the database is moved
  • For Docker: it’s recommended to use volume/bind mounts to /data rather than overriding FSPULSE_DATA_DIR
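
As a sketch, the following keeps configuration and logs in the default data directory while placing the database on a separate volume (the path is hypothetical):

# Store only the database on a separate volume; config and logs stay in the data directory
export FSPULSE_DATABASE_DIR=/mnt/storage/fspulse-db
fspulse serve
# The database is created as /mnt/storage/fspulse-db/fspulse.db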

Docker-Specific Variables

These variables are specific to Docker deployments:

Variable | Default | Valid Values    | Description
PUID     | 1000    | UID number      | User ID to run FsPulse as (for NAS permission matching)
PGID     | 1000    | GID number      | Group ID to run FsPulse as (for NAS permission matching)
TZ       | UTC     | Timezone string | Timezone for log timestamps and UI (e.g., America/New_York)

See Docker Deployment - NAS Deployments for details on PUID/PGID usage.
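
A typical NAS-style invocation might combine these with the standard /data volume (the IDs and timezone are illustrative):

docker run -d \
  --name fspulse \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/New_York \
  -p 8080:8080 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest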

Usage Examples

Native (Linux/macOS/Windows):

# Set environment variables
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_ANALYSIS_THREADS=16

# Run FsPulse (uses env vars)
fspulse serve

Docker - Command Line:

docker run -d \
  --name fspulse \
  -e FSPULSE_SERVER_PORT=9090 \
  -e FSPULSE_LOGGING_FSPULSE=debug \
  -e FSPULSE_ANALYSIS_THREADS=16 \
  -p 9090:9090 \
  -v fspulse-data:/data \
  gtunesdev/fspulse:latest

Docker Compose:

services:
  fspulse:
    image: gtunesdev/fspulse:latest
    environment:
      - FSPULSE_SERVER_PORT=9090
      - FSPULSE_LOGGING_FSPULSE=debug
      - FSPULSE_ANALYSIS_THREADS=16
    ports:
      - "9090:9090"

Verifying Environment Variables

To see what environment variables FsPulse is using:

Native:

env | grep FSPULSE_

Docker:

docker exec fspulse env | grep FSPULSE_

Docker Configuration

When running FsPulse in Docker, configuration is managed slightly differently. The config file lives at /data/config.toml inside the container, and you have several options for customizing settings.

For step-by-step instructions on configuring FsPulse in Docker, including editing config files and using environment variables, see the Docker Deployment - Configuration section.


New Settings and Restoring Defaults

FsPulse may expand its configuration options over time. When new settings are introduced, they won’t automatically appear in your existing config.toml. To take advantage of new options, either:

  • Manually add new settings to your config file
  • Delete the file to allow FsPulse to regenerate it with all current defaults (see the example below)
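
For example, on Linux with the default data directory, regeneration looks like this (adjust the path for your platform):

# Remove the existing config; FsPulse recreates it with current defaults on the next run
rm ~/.local/share/fspulse/config.toml
fspulse serve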

Validators

FsPulse can optionally validate file contents during the analysis phase of a scan. To enable validation, configure it in the web UI when setting up or initiating a scan.

Validation allows FsPulse to go beyond basic metadata inspection and attempt to decode the file’s contents using format-specific logic. This helps detect corruption or formatting issues in supported file types.


Validation Status Codes

Each item in the database has an associated validation status:

Status Code | Meaning
U           | Unknown — item has never been included in a validation scan
V           | Valid — most recent validation scan found no issues
I           | Invalid — validation failed; see the validation_error field
N           | No Validator — FsPulse does not currently support this file type

The validation_error field contains the error message returned by the validator only if the item was marked invalid. This field is empty for valid items or items with no validator.

Note: Some validation “errors” surfaced by the underlying libraries may not indicate corruption, but rather unsupported edge cases or metadata formatting. Always review the error messages before assuming a file is damaged.


Supported Validators

FsPulse relies on external Rust crates for performing format-specific validation. We gratefully acknowledge the work of the developers behind these crates for making them available to the Rust community.

File Types                                    | Crate  | Link
FLAC audio (.flac)                            | claxon | claxon on GitHub
Images (.jpg, .jpeg, .png, .gif, .tiff, .bmp) | image  | image on GitHub
PDF documents (.pdf)                          | lopdf  | lopdf on GitHub

Validation support may expand in future versions of FsPulse to cover additional file types such as ZIP archives, audio metadata, or XML/JSON files.


See the Query Syntax page for full details on query clauses and supported filters.

Advanced Topics

This section covers advanced concepts and technical details about FsPulse’s internal operation.


These topics are useful for developers, contributors, or users who want to understand FsPulse’s architecture more deeply.

Concepts

FsPulse is centered around tracking and understanding the state of the file system over time. The core entities in FsPulse — roots, scans, items, and changes — represent a layered model of this information.


Scans

Scans are the units of work performed by FsPulse. A scan operates on a file system tree specified by a path, deeply traversing that path and its children and recording information about the files and directories it discovers. The details of scanning are explained in Scanning.


Root

A root is the starting point for a scan. It represents a specific path on the file system that you explicitly tell FsPulse to track.

Each root is stored persistently in the database, and every scan you perform refers back to a root.

  • Paths are stored as absolute paths.
  • Each root has a unique ID.
  • You can scan a root multiple times over time.

Scan

A scan is a snapshot of a root directory at a specific point in time.

Each scan records metadata about:

  • The time the scan was performed
  • Whether hashing and validation were enabled
  • The collection of items (files and folders) found during the scan

Scans are always tied to a root via root_id, and are ordered chronologically by started_at.


Item

An item represents a single file or folder discovered during a scan.

Each item includes metadata such as:

  • Path
  • Whether it’s a file or directory
  • Last modified date
  • Size
  • Optional hash and validation info

Items are created when first seen, and are marked with a tombstone (is_ts = true) if they were present in previous scans but no longer exist.


Change

A change represents a detected difference in an item between the current scan and a previous one.

Changes may reflect:

  • File additions
  • File deletions
  • Metadata or content modifications

Each change is associated with both the scan and the item it affects.


Entity Flow

A simplified representation of how the entities relate:

Root
 └── Scan (per run)
      └── Item (files and folders)
           └── Change (if the item changed)

These concepts form the foundation of FsPulse’s scan and query capabilities. Understanding them will help you make the most of both interactive and command-line modes.

Database

FsPulse uses an embedded SQLite database to store all scan-related data. The database schema mirrors the core domain concepts used in FsPulse: roots, scans, items, and changes.


Database Name and Location

The database file is always named:

fspulse.db

Data Directory

FsPulse uses a data directory to store application data including configuration, logs, and (by default) the database. The data directory location is determined by:

  1. FSPULSE_DATA_DIR environment variable (if set) - overrides the default location
  2. Platform-specific default - uses the directories crate’s project local directory:
Platform | Value                                                 | Example
Linux    | $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse | /home/alice/.local/share/fspulse
macOS    | $HOME/Library/Application Support/fspulse             | /Users/Alice/Library/Application Support/fspulse
Windows  | {FOLDERID_LocalAppData}\fspulse\data                  | C:\Users\Alice\AppData\Local\fspulse\data
Docker   | /data                                                 | /data

What’s stored in the data directory:

  • Configuration file (config.toml)
  • Log files (logs/)
  • Database file (fspulse.db) - by default

Note for Docker users: The data directory defaults to /data and can be overridden with FSPULSE_DATA_DIR, but this is generally not recommended since you can map any host directory or Docker volume to /data instead.

Default Database Location

By default, the database is stored in the data directory:

<data_dir>/fspulse.db

For example:

/home/alice/.local/share/fspulse/fspulse.db

Custom Database Location

If you need to store the database outside the data directory (for example, on a different volume or network share), you can override the database directory specifically:

Environment Variable:

export FSPULSE_DATABASE_DIR=/path/to/custom/directory
fspulse serve

Config File (config.toml):

[database]
dir = "/path/to/custom/directory"

In both cases, FsPulse will store the database as fspulse.db inside the specified directory. The filename cannot be changed — only the directory is configurable.

Database Location Precedence:

  1. FSPULSE_DATABASE_DIR environment variable (highest priority)
  2. [database].dir in config.toml
  3. Data directory (from FSPULSE_DATA_DIR or platform default)

Important: Configuration and logs always remain in the data directory, even when the database is moved to a custom location.

See the Configuration - Database Settings section for more details.


Schema Overview

The database schema is implemented using Rust and reflects the same logical structure used by the query interface:

  • roots — scanned root directories
  • scans — individual scan snapshots
  • items — discovered files and folders with metadata
  • changes — additions, deletions, and modifications between scans

The schema is versioned to allow future upgrades without requiring a full reset.


Exploring the Database

Because FsPulse uses SQLite, you can inspect the database using any compatible tool, such as:

  • DB Browser for SQLite
  • The sqlite3 command-line tool
  • SQLite integrations in many IDEs and database browsers

⚠️ Caution: Making manual changes to the database may affect FsPulse’s behavior or stability. Read-only access is recommended.
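
For example, the sqlite3 command-line tool can open the database read-only and show its structure (the path below assumes the default Linux data directory):

# List the tables in the FsPulse database without risking modification
sqlite3 -readonly ~/.local/share/fspulse/fspulse.db ".tables"
# Show the schema of the items table
sqlite3 -readonly ~/.local/share/fspulse/fspulse.db ".schema items"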


FsPulse manages all internal data access automatically. Most users will not need to interact with the database directly.

Development

FsPulse is under active development, with regular improvements being made to both its functionality and documentation.

Contribution Policy

At this time, FsPulse is not open for public contribution. This may change in the future as the project matures and its architecture stabilizes.

If you’re interested in the project, you’re encouraged to:

  • Explore the source code on GitHub
  • Open GitHub issues to report bugs or request features
  • Follow the project for updates

Your interest and feedback are appreciated.


If contribution opportunities open in the future, setup instructions and contribution guidelines will be added to this page.

License

FsPulse is released under the MIT License.

You are free to use, modify, and distribute the software under the terms of this license.

Full License Text

MIT License

Copyright (c) 2025 gtunes-dev

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

See the full LICENSE file in the repository.

Acknowledgements

FsPulse relies on several open source Rust crates. We gratefully acknowledge the work of these maintainers, particularly for enabling file format validation.

File Format Validation

The following libraries enable FsPulse’s ability to detect corrupted files:

  • claxon — FLAC audio decoding and validation
  • image — Image format decoding for JPG, PNG, GIF, TIFF, BMP
  • lopdf — PDF parsing and validation

See Validators for the complete list of supported file types.

Additional Dependencies

FsPulse wouldn’t be possible without the incredible open source ecosystem it’s built upon:

Web Interface:

  • shadcn/ui — Beautiful, accessible component library
  • Radix UI — Unstyled, accessible UI primitives
  • Tailwind CSS — Utility-first CSS framework
  • Lucide — Clean, consistent icon set
  • React — UI framework

Backend & CLI:

  • rusqlite — SQLite database interface
  • axum — Web framework
  • tokio — Async runtime
  • clap — Command-line argument parsing
  • dialoguer — Interactive CLI prompts
  • ratatui — Terminal UI framework

The complete list of dependencies is available in the project’s Cargo.toml and package.json.


Thank you to all the open source maintainers whose work makes FsPulse possible.