fsPulse
Introduction
Read-Only Guarantee. fsPulse never modifies your files. It requires only read access to the directories you configure for scanning. Write access is required only for fsPulse’s own database, configuration files, and logs — never for your data.
Local-Only Guarantee. fsPulse makes no outbound network requests. All functionality runs entirely on your local system, with no external dependencies or telemetry.
What is fsPulse?
fsPulse is a comprehensive filesystem monitoring and integrity tool that gives you complete visibility into your critical directories. Track your data as it grows and changes over time, detect unexpected modifications, and catch silent threats like bit rot and corruption before they become disasters. fsPulse provides continuous awareness through automated scanning, historical trend analysis, and intelligent alerting.
Your filesystem is constantly evolving: files are added, modified, and deleted, and storage grows. But invisible problems hide beneath the surface. Bit rot silently corrupts data, ransomware alters files while preserving timestamps, and directories bloat without anyone noticing.
fsPulse gives you continuous awareness of both the visible and invisible:
Monitor Change & Growth:
- Track directory sizes and growth trends over time
- Visualize file additions, modifications, and deletions
- Understand what’s changing and when across all scans
Detect Integrity Issues:
- Content Hashing (SHA2): Catches when file contents change even though metadata stays the same—the signature of bit rot or tampering
- Format Validation: Reads and validates file structures to detect corruption in FLAC, JPEG, PNG, PDF, and more
Whether you’re managing storage capacity, tracking project evolution, or ensuring data integrity, fsPulse provides the visibility and peace of mind that comes from truly knowing the state of your data.
Key Capabilities
- Health-at-a-Glance Overview — See the status of all monitored directories immediately: open alerts, last scan times, and overall health
- Continuous Monitoring — Schedule recurring scans (daily, weekly, monthly, or custom intervals) to track your filesystem automatically
- Temporal Versioning — Every item’s state is tracked over time; browse your filesystem as it appeared at any past scan
- Size & Growth Tracking — Monitor directory sizes and visualize storage trends over time with dual-format units
- Change Detection — Track all file additions, modifications, and deletions through version history
- Integrity Verification — SHA2 hashing detects bit rot and tampering; format validators catch corruption in supported file types
- Historical Trends — Interactive trend charts show how your data evolves: sizes, counts, changes, and alerts
- Alert System — Suspect hash changes and validation failures flagged immediately with status management
- Powerful Query Language — SQL-inspired syntax for filtering, sorting, and analyzing across five data domains
- Web-First Design — Elegant web UI for all operations including scanning, browsing, querying, and configuration
Running fsPulse
fsPulse is a web-first application. Start the server and access all functionality through your browser:
fspulse
Then open http://localhost:8080 in your browser to access the web interface.
The web UI provides complete functionality for managing roots, scheduling and monitoring scans, browsing your filesystem data, running queries, and managing alerts. Configuration is done through environment variables or a config file—see Configuration for details.
fsPulse is designed to scale across large file systems while maintaining clarity and control for the user.
This book provides comprehensive documentation for all aspects of fsPulse. Start with Getting Started for installation, or jump to any section that interests you.
Getting Started
fsPulse can be installed in one of four ways:
- Run with Docker (Recommended)
- Install via crates.io
- Clone and build from source
- Download a pre-built release binary from GitHub
Choose the method that works best for your platform and preferences.
1. Run with Docker (Recommended)
The easiest way to run fsPulse is with Docker:
docker pull gtunesdev/fspulse:latest
docker run -d \
--name fspulse \
-p 8080:8080 \
-v fspulse-data:/data \
gtunesdev/fspulse:latest
Access the web UI at http://localhost:8080
The web UI provides full functionality: managing roots, initiating scans, querying data, and viewing results—all from your browser.
See the Docker Deployment chapter for complete documentation including:
- Volume management for scanning host directories
- Configuration options
- Docker Compose examples
- NAS deployment (TrueNAS, Unraid)
- Troubleshooting
2. Install via Crates.io
If you already have a Rust toolchain, you can install fsPulse directly from crates.io:
cargo install fspulse
This will download, compile, and install the latest version of fsPulse into Cargo’s bin directory, typically ~/.cargo/bin. That directory is usually already in your PATH. If it’s not, you may need to add it manually.
Then run:
fspulse --help
To upgrade to the latest version later:
cargo install fspulse --force
3. Clone and Build from Source
If you prefer working directly with the source code (for example, to contribute or try out development versions):
git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
cargo build --release
Then run it from the release build directory:
./target/release/fspulse --help
4. Download Pre-Built Release Binaries
Pre-built release binaries for Linux, macOS, and Windows are available on the GitHub Releases page:
- Visit the releases page.
- Download the appropriate archive for your operating system.
- Unpack the archive.
- Optionally move the fspulse binary to a directory included in your PATH.
For example, on Unix systems:
mv fspulse /usr/local/bin/
Then confirm it’s working:
fspulse --help
Running fsPulse
After installation, start the fsPulse server:
fspulse
Or explicitly:
fspulse serve
Then open your browser to http://localhost:8080 to access the web interface.
fsPulse is a web-first application. All functionality is available through the web UI:
- Health overview with root status and alert counts
- Root and schedule management
- Interactive browsing with tree, folder, and search views
- Point-in-time filesystem snapshots and comparison
- Integrity filtering by hash state and validation state
- Powerful query interface across five data domains
- Alert management and configuration
Configuration
fsPulse is configured through environment variables or a config file, not command-line flags:
# Example: Change port and enable debug logging
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
fspulse
See Configuration for all available settings and the Command-Line Interface page for more details.
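The same settings can also live in the config file. Below is a hypothetical config.toml fragment; the section and field names are assumptions inferred from the FSPULSE_SERVER_PORT and FSPULSE_LOGGING_FSPULSE variables above, so check the Configuration chapter for the authoritative names:

```toml
# Hypothetical config.toml sketch -- field names inferred from the
# FSPULSE_<SECTION>_<FIELD> environment-variable pattern.
[server]
port = 9090

[logging]
fspulse = "debug"
```

Environment variables take precedence over the config file, so either mechanism (or a mix) works.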
Installation
fsPulse can be installed in several ways depending on your preferences and environment.
Docker Hub (Recommended)
Pull the official image and run:
docker pull gtunesdev/fspulse:latest
docker run -d --name fspulse -p 8080:8080 -v fspulse-data:/data gtunesdev/fspulse:latest
Multi-architecture support: linux/amd64, linux/arm64
See Docker Deployment for complete instructions.
Cargo (crates.io)
Install via Rust’s package manager:
cargo install fspulse
Requires Rust toolchain installed on your system.
Pre-built Binaries
Download platform-specific binaries from GitHub Releases.
Available for: Linux, macOS, Windows
macOS builds include both Intel (x86_64) and Apple Silicon (ARM64) binaries.
Note: All web UI assets are embedded in the binary—no external files or dependencies required.
Next Steps
- Want to build from source? See Building from Source
- Ready to start using fsPulse? Proceed to First Steps
Building from Source
This guide covers building fsPulse from source code, which is useful for development, customization, or running on platforms without pre-built binaries.
Prerequisites
Before building fsPulse, ensure you have the following installed:
Required Tools
- Rust (latest stable version)
  - Install via rustup
  - Verify: cargo --version
- Node.js (v18 or later) with npm
  - Install from nodejs.org
  - Verify: node --version and npm --version
Platform-Specific Requirements
Windows:
- Visual Studio Build Tools or Visual Studio with C++ development tools
- Required for SQLite compilation
Linux:
- Build essentials: build-essential (Ubuntu/Debian) or base-devel (Arch)
- May need pkg-config and libsqlite3-dev depending on distribution
macOS:
- Xcode Command Line Tools: xcode-select --install
Quick Build (Recommended)
The easiest way to build fsPulse is using the provided build script:
git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
./scripts/build.sh
The script will:
- Check for required tools
- Install frontend dependencies
- Build the React frontend
- Compile the Rust binary with embedded assets
The resulting binary will be at: ./target/release/fspulse
Manual Build
If you prefer to run each step manually or need more control:
Step 1: Clone the Repository
git clone https://github.com/gtunes-dev/fspulse.git
cd fspulse
Step 2: Build the Frontend
cd frontend
npm install
npm run build
cd ..
This creates the frontend/dist/ directory containing the compiled React application.
Step 3: Build the Rust Binary
cargo build --release
The binary will be at: ./target/release/fspulse
Important: The frontend must be built before compiling the Rust binary. The web UI assets are embedded into the binary at compile time via RustEmbed. If frontend/dist/ doesn’t exist, the build will fail with a helpful error message.
Development Builds
For development, you can skip the release optimization:
# Frontend (development mode with hot reload)
cd frontend
npm run dev
# Backend (in a separate terminal)
cargo run -- serve
In development mode, the backend serves frontend files directly from frontend/dist/ rather than using embedded assets, allowing for faster iteration.
Troubleshooting
“Frontend assets not found”
Error:
❌ ERROR: Frontend assets not found at 'frontend/dist/'
Solution: Build the frontend first:
cd frontend
npm install
npm run build
cd ..
Windows: “link.exe not found”
Error: Missing Visual Studio Build Tools
Solution: Install Visual Studio Build Tools with C++ development tools from visualstudio.microsoft.com
Linux: “cannot find -lsqlite3”
Error: Missing SQLite development libraries
Solution: Install the platform-specific package:
- Ubuntu/Debian: sudo apt-get install libsqlite3-dev
- Fedora: sudo dnf install sqlite-devel
- Arch: sudo pacman -S sqlite
npm install fails
Error: Network or permission issues with npm
Solution:
- Clear npm cache: npm cache clean --force
- Check Node.js version: node --version (should be v18+)
- Try with sudo (not recommended) or fix npm permissions
Running Your Build
After building, run fsPulse:
./target/release/fspulse --help
./target/release/fspulse serve
Access the web UI at: http://localhost:8080
Next Steps
- First Steps - Configure and start using fsPulse
- Configuration - Customize fsPulse behavior
- Development - Contributing to fsPulse
Docker Deployment
The easiest way to run fsPulse is with Docker. The container runs fsPulse as a background service with the web UI accessible on port 8080. You can manage roots, initiate scans, query data, and view results—all from your browser.
Quick Start
Get fsPulse running in three simple steps:
# 1. Pull the image
docker pull gtunesdev/fspulse:latest
# 2. Run the container
docker run -d \
--name fspulse \
-p 8080:8080 \
-v fspulse-data:/data \
gtunesdev/fspulse:latest
# 3. Access the web UI
open http://localhost:8080
That’s it! The web UI is now running.
This basic setup stores all fsPulse data (database, config, logs) in a Docker volume and uses default settings. If you need to customize settings (like running as a specific user for NAS deployments, or changing the port), see the Configuration and NAS Deployments sections below.
Scanning Your Files
To scan directories on your host machine, you need to mount them into the container. fsPulse can then scan these mounted paths.
Mounting Directories
Add -v flags to mount host directories into the container. We recommend mounting them under /roots for clarity:
docker run -d \
--name fspulse \
-p 8080:8080 \
-v fspulse-data:/data \
-v ~/Documents:/roots/documents:ro \
-v ~/Photos:/roots/photos:ro \
gtunesdev/fspulse:latest
The :ro (read-only) flag is recommended for safety—fsPulse only reads files during scans and never modifies them.
Creating Roots in the Web UI
After mounting directories:
- Open http://localhost:8080 in your browser
- Navigate to Roots in the sidebar
- Click Add Root
- Enter the container path: /roots/documents (not the host path ~/Documents)
- Click Create Root
Important: Always use the container path (e.g., /roots/documents), not the host path. The container doesn’t know about host paths.
Once roots are created, you can scan them from the web UI and monitor progress in real-time.
Docker Compose (Recommended)
For persistent deployments, Docker Compose is cleaner and easier to manage:
version: '3.8'
services:
fspulse:
image: gtunesdev/fspulse:latest
container_name: fspulse
restart: unless-stopped
ports:
- "8080:8080"
volumes:
# Persistent data storage - REQUIRED
# Must map /data to either a Docker volume (shown here) or a host path
# Must support read/write access for database, config, and logs
- fspulse-data:/data
# Alternative: use a host path instead
# - /path/on/host/fspulse-data:/data
# Directories to scan (read-only recommended for safety)
- ~/Documents:/roots/documents:ro
- ~/Photos:/roots/photos:ro
environment:
# Optional: override any configuration setting
# See Configuration section below and https://gtunes-dev.github.io/fspulse/configuration.html
- TZ=America/New_York
volumes:
fspulse-data:
Save as docker-compose.yml and run:
docker-compose up -d
Configuration
fsPulse creates a default config.toml on first run with sensible defaults. Most users won’t need to change anything, but when you do, there are three ways to customize settings.
Option 1: Use Environment Variables (Easiest)
Override any setting using environment variables. This works with both docker run and Docker Compose.
Docker Compose example:
services:
fspulse:
image: gtunesdev/fspulse:latest
environment:
- FSPULSE_SERVER_PORT=9090 # Change web UI port
- FSPULSE_LOGGING_FSPULSE=debug # Enable debug logging
- FSPULSE_ANALYSIS_THREADS=16 # Use 16 analysis threads
ports:
- "9090:9090"
Command line example (equivalent to above):
docker run -d \
--name fspulse \
-p 9090:9090 \
-e FSPULSE_SERVER_PORT=9090 \
-e FSPULSE_LOGGING_FSPULSE=debug \
-e FSPULSE_ANALYSIS_THREADS=16 \
-v fspulse-data:/data \
gtunesdev/fspulse:latest
Environment variables follow the pattern FSPULSE_<SECTION>_<FIELD> and override any settings in config.toml. See the Configuration chapter for a complete list of available variables and their purposes.
Option 2: Edit the Config File
If you prefer editing the config file directly:
1. Extract the auto-generated config:
   docker exec fspulse cat /data/config.toml > config.toml
2. Edit config.toml with your preferred settings
3. Copy it back and restart:
   docker cp config.toml fspulse:/data/config.toml
   docker restart fspulse
Option 3: Pre-Mount Your Own Config (Advanced)
If you want custom settings before first launch, create your own config.toml and mount it:
volumes:
- fspulse-data:/data
- ./my-config.toml:/data/config.toml:ro
Most users should start with Option 1 (environment variables) or Option 2 (edit after first run).
NAS Deployments (TrueNAS, Unraid)
NAS systems often have specific user IDs for file ownership. By default, fsPulse runs as user 1000, but you may need it to match your file ownership.
Setting User and Group IDs
Use PUID and PGID environment variables to run fsPulse as a specific user:
TrueNAS Example (apps user = UID 34):
docker run -d \
--name fspulse \
-p 8080:8080 \
-e PUID=34 \
-e PGID=34 \
-e TZ=America/New_York \
-v /mnt/pool/fspulse/data:/data \
-v /mnt/pool/documents:/roots/docs:ro \
gtunesdev/fspulse:latest
Unraid Example (custom UID 1001):
docker run -d \
--name fspulse \
-p 8080:8080 \
-e PUID=1001 \
-e PGID=100 \
-v /mnt/user/appdata/fspulse:/data \
-v /mnt/user/photos:/roots/photos:ro \
gtunesdev/fspulse:latest
Why PUID/PGID Matters
Even though you mount directories as read-only (:ro), Linux permissions still apply. If your files are owned by UID 34 and aren’t world-readable, fsPulse (running as UID 1000 by default) won’t be able to scan them. Setting PUID=34 makes fsPulse run as the same user that owns the files.
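A quick way to check for this mismatch before deploying is to compare the numeric owner of your files against the UID you run as. This is an illustrative sketch; DIR is a placeholder for whichever directory you plan to mount:

```shell
# Compare the numeric owner of your files with the UID the container will run as.
# DIR is a placeholder -- point it at a directory you plan to mount.
DIR="${DIR:-.}"
ls -ln "$DIR" | head -n 5   # the numeric columns show the owning UID and GID
id -u                        # the UID of your current user (container default is 1000)
```

If the UIDs differ and the files are not world-readable, set PUID/PGID to the owning UID.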
When to use PUID/PGID:
- Files have restrictive permissions (not world-readable)
- Using NAS with specific user accounts (TrueNAS, Unraid, Synology)
- You need the /data directory to match specific host ownership
Advanced Topics
Custom Network Settings
If you’re using macvlan or host networking, ensure the server binds to all interfaces:
services:
fspulse:
image: gtunesdev/fspulse:latest
network_mode: host
environment:
- FSPULSE_SERVER_HOST=0.0.0.0 # Required for non-bridge networking
- FSPULSE_SERVER_PORT=8080
Note: The Docker image sets FSPULSE_SERVER_HOST=0.0.0.0 by default, so this is only needed if your config.toml overrides it to 127.0.0.1.
Reverse Proxy Setup
For public access with authentication, use a reverse proxy like nginx:
server {
listen 80;
server_name fspulse.example.com;
location / {
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# WebSocket support for scan progress
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Using Bind Mounts Instead of Volumes
By default, we use Docker volumes (-v fspulse-data:/data) which Docker manages automatically. For NAS deployments, you might prefer bind mounts to integrate with your existing backup schemes:
# Create directory on host
mkdir -p /mnt/pool/fspulse/data

# Use a bind mount: map /data to a host path instead of a Docker volume
docker run -d \
  --name fspulse \
  -p 8080:8080 \
  -v /mnt/pool/fspulse/data:/data \
  gtunesdev/fspulse:latest
Benefits of bind mounts for NAS:
- Included in your NAS snapshot schedule
- Backed up with your existing backup system
- Directly accessible for manual inspection
Trade-off: You need to manage permissions yourself (use PUID/PGID if needed).
Troubleshooting
Cannot Access Web UI
Problem: http://localhost:8080 doesn’t respond
Solutions:
1. Check the container is running:
   docker ps | grep fspulse
2. Check logs for errors:
   docker logs fspulse
   Look for the “Server started” message.
3. Verify port mapping:
   docker port fspulse
   Should show 8080/tcp -> 0.0.0.0:8080
Permission Denied Errors
Problem: “Permission denied” when scanning or accessing /data
Solutions:
1. Check file ownership:
   ls -ln /path/to/your/files
2. Set PUID/PGID to match the file owner:
   docker run -e PUID=1000 -e PGID=1000 ...
3. For bind mounts, ensure the host directory is writable:
   chown -R 1000:1000 /mnt/pool/fspulse/data
Configuration Changes Don’t Persist
Problem: Settings revert after container restart
Solution: Verify /data volume is mounted:
docker inspect fspulse | grep -A 10 Mounts
If missing, recreate container with volume:
docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ...
Database Locked Errors
Problem: “Database is locked” errors
Cause: Multiple containers accessing the same database
Solution: Only run one fsPulse container per database. Don’t mount the same /data volume to multiple containers.
Data Backup
Backing Up Your Data
For Docker volumes:
# Stop container
docker stop fspulse
# Backup volume
docker run --rm \
-v fspulse-data:/data \
-v $(pwd):/backup \
alpine tar czf /backup/fspulse-backup.tar.gz /data
# Restart container
docker start fspulse
For bind mounts:
# Simply backup the host directory
tar czf fspulse-backup.tar.gz /mnt/pool/fspulse/data
Restoring from Backup
For Docker volumes:
# Create volume
docker volume create fspulse-data-restored
# Restore data
docker run --rm \
-v fspulse-data-restored:/data \
-v $(pwd):/backup \
alpine sh -c "cd / && tar xzf /backup/fspulse-backup.tar.gz"
# Use restored volume
docker run -d --name fspulse -v fspulse-data-restored:/data ...
For bind mounts:
tar xzf fspulse-backup.tar.gz -C /mnt/pool/fspulse/data
Image Tags and Updates
fsPulse provides multiple tags for different update strategies:
| Tag | Description | When to Use |
|---|---|---|
| latest | Latest stable release | Convenience; not recommended for production |
| 1.2.3 | Specific version | Production (exact control) |
| 1.2 | Latest patch of minor version | Production (auto-patch updates) |
| main | Development builds | Testing new features |
Recommendation: Use specific version tags (1.2.3) or minor version tags (1.2) for production. Avoid latest in production to prevent unexpected updates.
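In Docker Compose, pinning is a one-line change to the image reference (the 1.2 tag here is illustrative):

```yaml
services:
  fspulse:
    # Pinning to a minor version picks up patch releases automatically
    # while avoiding surprise major or minor upgrades.
    image: gtunesdev/fspulse:1.2
```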
Updating to a new version:
docker pull gtunesdev/fspulse:1.2.3
docker stop fspulse
docker rm fspulse
docker run -d --name fspulse -v fspulse-data:/data ... gtunesdev/fspulse:1.2.3
Your data persists in the volume across updates.
Platform Support
fsPulse images support multiple architectures—Docker automatically pulls the correct one for your platform:
- linux/amd64 - Intel/AMD processors (most common)
- linux/arm64 - ARM processors (Apple Silicon, Raspberry Pi 4, ARM servers)
Next Steps
- Explore the Configuration reference for all available settings
- Learn about Query Syntax for advanced data filtering
- Read Scanning to understand how scans work
Getting Help
- Issues: GitHub Issues
- Docker Hub: gtunesdev/fspulse
- Documentation: fsPulse Book
First Steps
This guide walks you through your first scan and basic usage of fsPulse.
Starting the Web Interface
Launch fsPulse:
fspulse
Open your browser to http://127.0.0.1:8080
You’ll see the Home page. On first launch with no roots configured, it will display a welcome message prompting you to add your first root.
Adding Your First Scan Root
- Navigate to the Roots page (in the utility section of the sidebar)
- Click Add Root
- Enter the path to the directory you want to monitor
- Save
Running Your First Scan
- From the Roots page, click Scan Now for your newly added root
- Watch the real-time progress on the Home page — you’ll see live statistics as fsPulse scans, sweeps for deleted items, and analyzes files
- Once complete, explore the results
Exploring Your Data
After your first scan completes:
- Home — See the health status of your root, including any alerts generated
- Browse — Navigate your filesystem hierarchy with tree, folder, or search views. Open the detail panel to inspect any file’s metadata, size history, version history, and alerts.
- Alerts — Check for any integrity issues detected (suspect hash changes, validation failures, access errors)
- Trends — View charts and trends (requires multiple scans to generate meaningful data)
- Data Explorer — Run queries against your scan data using the visual builder or free-form query language
Setting Up Scheduled Scans
- Navigate to the Schedules page
- Click Add Schedule
- Select your root and configure:
- Schedule type (daily, weekly, monthly, interval)
- Time/day settings
- Scan options (hashing, validation)
- Save the schedule
Scheduled scans will run automatically based on your configuration. You can see upcoming tasks on the Home page and the full activity log on the History page.
Next Steps
- Learn about Scanning Concepts — how hashing and validation work
- Explore the Interface features in detail
- Understand the Query Syntax for advanced analysis
Scanning
fsPulse scans are at the core of how it tracks changes to the filesystem over time. A scan creates a snapshot of a root directory and detects changes compared to previous scans. This page explains how to initiate scans, how incomplete scans are handled, and the phases involved in each scan.
Initiating a Scan
fsPulse runs as a web service, and scans are initiated through the web UI:
- Start the server: fspulse serve
- Open http://localhost:8080 in your browser (or the custom port you’ve configured)
- Navigate to the Roots page to add directories to monitor
- Start a manual scan or let a schedule trigger one
- Monitor real-time progress on the Home page
The web UI supports both scheduled automatic scans and manual on-demand scans. You can create recurring schedules on the Schedules page (daily, weekly, monthly, or custom intervals) or initiate individual scans from the Roots page as needed.
Once a scan on a root has begun, it must complete or be explicitly stopped before another scan on the same root can be started. Scans on different roots can run independently.
Hashing
Hashing is a key capability of fsPulse.
fsPulse uses the standard SHA-256 algorithm (part of the SHA-2 family) to compute digital fingerprints of file contents. Hashing enables the detection of changes to file content in cases where the modification date and file size have not changed. One example of a case where this might occur is bit rot (data decay).
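You can reproduce the fingerprint idea with a standard tool such as sha256sum: identical bytes always produce the same digest, while a change of even a single byte yields a completely different one.

```shell
# Two byte-identical files produce the same SHA-256 fingerprint;
# a one-byte difference produces a completely different one.
printf 'hello\n' > a.txt
printf 'hello\n' > b.txt
printf 'hellp\n' > c.txt
sha256sum a.txt b.txt c.txt
```

This is exactly the property fsPulse relies on: matching digests mean unchanged content, regardless of what the metadata says.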
When configuring a scan in the web UI, you can enable hashing with these options:
- Hash changed items (default): Compute hashes for items that have never been hashed or whose file size or modification date has changed
- Hash all items: Hash all files, including those that have been previously hashed
If a hash is detected to have changed without a corresponding metadata change (modification date or size), the file’s hash_state is set to Suspect and an alert is generated (see Alerts). If metadata did change, the hash change is considered legitimate and hash_state remains Baseline.
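The rule above boils down to: a hash change is only suspicious when the metadata gives no reason for it. A toy sketch of that decision (illustrative only, not fsPulse's actual implementation):

```shell
# Toy sketch of the suspect-hash rule (not fsPulse internals):
# a changed hash with unchanged mtime and size is flagged Suspect;
# a changed hash alongside a metadata change stays Baseline.
old_hash="abc"; new_hash="def"          # content fingerprint changed...
old_mtime=1700000000; new_mtime=1700000000
old_size=4096; new_size=4096            # ...but mtime and size did not
if [ "$new_hash" != "$old_hash" ] \
   && [ "$new_mtime" -eq "$old_mtime" ] \
   && [ "$new_size" -eq "$old_size" ]; then
  echo "hash_state: Suspect"
else
  echo "hash_state: Baseline"
fi
```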
Hash States
Hash state tracks the integrity of a file’s content hash over time. Hash states are stored in the database as:
- U: Unknown. No hash has been computed for this file version
- B: Baseline. The first hash computed for a version, establishing the reference point
- S: Suspect. Hash changed without a corresponding metadata change
The first hash computed for any item version is always Baseline — it establishes the reference against which future hashes are compared. If a subsequent hash differs without a metadata change, it is marked Suspect. When an item’s metadata changes, a new version is created and the next hash establishes a new Baseline.
Finding Hash Changes
You can investigate hash changes through the web UI:
- Alerts Page: Shows suspect hash changes where file metadata hasn’t changed
- Browse Page: Click any item to see its version history including hash changes
- Data Explorer: Use the Query tab to run custom queries
Example query to find items with suspect hashes (run on the Data Explorer’s Query tab):
alerts where alert_type:(H) order by created_at desc
Validating
fsPulse can attempt to assess the “validity” of files.
fsPulse uses community-contributed libraries to “validate” files: validation consists of opening and reading (or traversing) a file, and these libraries raise a variety of errors when they encounter invalid content.
fsPulse’s ability to validate files is limited to the capabilities of the libraries that it uses, and these libraries vary in terms of completeness and accuracy. In some cases, such as fsPulse’s use of lopdf to validate PDF files, false positive “errors” may be detected as a consequence of lopdf encountering PDF file contents it does not yet understand. Despite these limitations, fsPulse offers a unique and effective view into potential validity issues in files.
See Validators for the complete list of supported file types.
When configuring a scan in the web UI, you can enable validation with these options:
- Validate changed items (default): Validate files that have never been validated or have changed in terms of modification date or size
- Validate all items: Validate all files regardless of previous validation status
Validation States
Validation applies only to files — folders do not have a validation state. Validation states are stored in the database as:
- U: Unknown. No validation has been performed
- N: No Validator. No validator exists for this file type
- V: Valid. Validation was performed and no errors were encountered
- I: Invalid. Validation was performed and an error was encountered
In the case of ‘I’ (Invalid), the validation error message is stored alongside the validation state. When an item’s validation state changes, a new item version is created capturing both the old and new states.
If a validation pass produces an error identical to the existing error, no new version is created — only the last_val_scan bookkeeping field is updated.
Finding Validation Issues
Invalid items are automatically flagged as alerts. You can investigate validation failures through the web UI:
- Alerts Page: Shows all items with validation failures, with filtering and status management
- Browse Page: Click any item to see its validation status and error details in the inline detail panel
- Data Explorer: Use the Query tab to run custom queries
Example query to find items currently in an invalid validation state:
items where val_state:(I) show default, val_error order by item_path
Additional queries can filter on specific validation states. See Query Syntax for details.
In-Progress Scans
fsPulse is designed to be resilient to interruptions like system crashes or power loss. If a scan stops before completing, fsPulse saves its state so it can be resumed later.
When you attempt to start a new scan on a root that has an in-progress scan, the web UI will prompt you to:
- Resume the scan from where it left off
- Stop the scan and discard its partial results
Stopping a scan reverts the database to its pre-scan state using an undo log. All detected versions, computed hashes, and validations from that partial scan will be discarded.
Phases of a Scan
Each scan proceeds in three main phases:
1. Scanning
The directory tree is recursively traversed. For each file or folder encountered:
- If not seen before:
  - A new item identity is created
  - A new item version is inserted with first_scan_id = current_scan
- If seen before, fsPulse compares current filesystem metadata:
  - Modification date (files and folders)
  - File size (files only)
  - If metadata differs, a new item version is created carrying forward unchanged properties (hash, validation) from the previous version
  - If unchanged, the existing version’s last_scan_id is updated in place
- If the path matches a deleted item (previous version has is_deleted = true):
  - A new version is created with is_deleted = false (rehydration)
Files and folders are treated as distinct types. A single path that appears as both a file and folder at different times results in two separate items.
Writes are batched (100 items per transaction) for performance. An undo log records in-place updates to support rollback if the scan is stopped.
2. Sweeping
fsPulse identifies items not seen during the current scan:
- Any item whose current version is not deleted and was not visited in this scan gets a new version with is_deleted = true.
Moved files appear as deletes and adds, as fsPulse does not track move operations.
3. Analyzing
This phase runs only if hashing and/or validation is enabled (see Hashing and Validating above).
- Hashing — Computes a SHA-256 hash of file contents
- Validation — Uses file-type-specific validators to check content integrity (see Validators)
If a hash or validation result changes:
- If the item already received a new version in the scanning phase (same scan), the existing version is updated in place
- Otherwise, the previous version’s last_scan_id is restored and a new version is created
This guarantees at most one new version per item per scan.
If the hash and validation results are unchanged, only the bookkeeping fields (last_hash_scan, last_val_scan) are updated on the existing version.
Performance and Threading
The analysis phase runs in parallel:
- Default: 8 threads
- Configurable from 1 to 24 in Configuration
Summary of Phases
| Phase | Purpose |
|---|---|
| Scanning | Traverses the filesystem, creates or updates item versions |
| Sweeping | Marks missing items as deleted with new version rows |
| Analyzing | Computes hashes and validates files, updating or creating versions |
Each scan provides a consistent view of the filesystem at a moment in time. The temporal versioning model means you can reconstruct the exact state of any item at any scan point.
Interface
The fsPulse interface provides a comprehensive visual environment for monitoring your filesystems, managing scans, analyzing trends, and investigating issues.
Overview
Access the interface by running:
fspulse
By default, the interface is available at http://127.0.0.1:8080. You can customize the host and port through configuration or environment variables (see Configuration).
Navigation
The left sidebar organizes pages into two groups:
Primary — the pages you’ll use most often:
- Home — Health overview showing root status, active tasks, and recent activity
- Browse — Navigate filesystem hierarchy with tree, folder, and search views
- Alerts — Manage integrity issues and validation failures
- Trends — Visualize historical data with interactive charts
Utility — operational and investigative pages:
- History — Scan and task activity log
- Roots — Add, remove, and scan monitored directories
- Schedules — Create and manage automated scan schedules
- Data Explorer — Query interface for advanced data analysis
- Settings — Edit configuration, view database stats and system info
Sidebar
The sidebar can be collapsed to icon-only mode for more screen space:
- Click the collapse button in the sidebar footer
- Use the keyboard shortcut Cmd+B (macOS) or Ctrl+B (Windows/Linux)
- Click the sidebar’s right edge (rail) to toggle
- The sidebar automatically collapses on narrower screens and expands on wider ones
When collapsed, hovering over an icon shows a tooltip with the page name.
Shared Root Context
When you select a root on Browse, Alerts, Trends, Schedules, or History, that selection is carried across pages via a URL parameter (?root_id=N). Selecting a root on Browse and then navigating to Alerts automatically pre-selects the same root, so you don’t need to re-select it on every page.
Live Updates
The web interface uses WebSocket connections to provide real-time updates during task execution. When a scan is running, you can watch progress updates, statistics, and phase transitions as they happen. The sidebar footer also shows a compact progress indicator for the active task.
Home
The Home page is the landing page of fsPulse. It answers the most important question at a glance: “Is my data safe?”
Root Health Summary
The centerpiece of the Home page is the root health summary, which shows the status of every monitored directory:
- Root path — The directory being monitored
- Last scan time — When the root was last scanned, with staleness indicators for roots that haven’t been scanned recently
- Open alerts — Count of unresolved alerts (highlighted when non-zero)
- Last scan outcome — Whether the most recent scan completed successfully, stopped, or errored
If all roots show recent scans with zero alerts, you know your data is healthy. If a root shows open alerts, you know to investigate further.
Each root row is clickable — clicking navigates to the Browse page for that root.
When no roots have been configured, the Home page shows a welcome message with a link to the Roots page to add your first root.
Active Task
When a scan or other task is running, the Home page displays a progress card showing:
- Task type and target root
- Current phase (Scanning, Sweeping, Analyzing Files, Analyzing Scan)
- Real-time statistics — files and folders processed, items hashed, items validated
- Progress indicator
Progress updates are delivered in real time via WebSocket. When no task is running, the card shows an idle state with a button to initiate a scan.
Upcoming Tasks
A table shows tasks queued for execution, typically generated from schedules:
- Task type and target root
- Scheduled run time
- Source schedule
Tasks execute sequentially — only one task runs at a time.
Recent Activity
A compact summary of recent scan and task activity, showing the last several completed operations with their outcomes. For the full activity log, click the View All link to navigate to the History page.
Scan Now
Click the scan button on the active task card to initiate a scan. You’ll select:
- Which root to scan
- Hashing mode (None, Hash changed items, Hash all items)
- Validation mode (None, Validate changed items, Validate all items)
Pause / Resume
fsPulse supports pausing all task execution. When paused:
- A banner appears showing how long tasks have been paused and when the pause will expire (if a duration was set)
- No new tasks will start until resumed
- You can edit the pause duration or resume immediately
This is useful when you want to temporarily prevent scans from running (for example, during a backup window or heavy system load).
Browse
The Browse page is an investigation workbench for navigating your filesystem hierarchy, comparing states across time, and inspecting individual items in detail.
Browse Cards
Each browse card is a self-contained browsing environment with its own root selection, scan selection, view mode, and optional detail panel.
Root and Scan Selection
At the top of each card:
- Root picker: Select which root directory to browse. If you navigated here from another page with a root selected, it will be pre-selected via the shared root context.
- Scan bar: Shows the current scan (e.g., “Scan #42 — 15 Jan at 10:30”) with a calendar button to pick a different scan and a “Latest” button to jump to the most recent scan.
The calendar picker highlights dates that have scans and disables dates without scans. When you select a date, it shows all scans for that day with their change summaries (adds, modifications, deletions). If you select a date with no scans, the nearest available scan is shown.
View Modes
Three view modes are available via tabs:
Tree View
A hierarchical expand/collapse tree showing the filesystem structure at the selected scan point:
- Click the chevron on folders to expand or collapse
- Click an item name to select it and open the detail panel
- Children are loaded on demand for performance
- Folder expansion state is preserved when switching between scans or when the tree refreshes
- Deleted items are shown with strikethrough and a trash icon when “Show deleted” is enabled
- Items are color-coded by change type: green (added), blue (modified), red (deleted), gray (unchanged)
Folder View
A flat, breadcrumb-navigated view similar to a file explorer:
- Breadcrumb ribbon at the top shows the current path — click any segment to navigate up
- Sortable columns: Name, Size, Modified Date — click column headers to sort (ascending/descending)
- Folder navigation: Click a folder’s icon to navigate into it
- Item selection: Click an item’s name to select it for the detail panel
- Directories are always sorted first, then by the selected column
- When switching from Folder view to Tree view, the tree automatically reveals and expands to your current folder location
Search View
A text search across all items in the selected root and scan:
- Type in the search box to filter items by path (with debounce)
- Results appear as a flat list with parent path context shown below each item name
- Click any result to select it for the detail panel
- Typing in the search box automatically switches to the Search tab
Integrity Filters
A collapsible filter panel lets you narrow the view by integrity status across three dimensions:
Change Kind: Filter by Added, Modified, Deleted, or Unchanged items.
Hash State: Filter by Baseline, Unknown, or Suspect hash states. Folders are shown if they contain descendants matching the selected states.
Validation State: Filter by Valid, Invalid, Unknown, or No Validator states. Works the same as hash state filtering with descendant logic for folders.
Items must pass all active filter dimensions to be visible. Active filters are indicated with visual highlights on the filter buttons.
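The conjunction rule can be stated compactly. Field names here are hypothetical; this sketch omits the descendant logic for folders:

```python
def visible(item: dict, filters: dict[str, set]) -> bool:
    """An item is shown only if it passes every active filter dimension
    (change kind, hash state, validation state)."""
    return all(item.get(dim) in allowed for dim, allowed in filters.items())
```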
Controls
- Show deleted: Toggle to include or hide deleted items across all views
- Each view mode maintains its own independently selected item — switching tabs preserves your selection in each
Detail Panel
Clicking any item opens an inline detail panel within the card. The panel can be positioned on either side of the card using the flip button.
Item Information
- File/folder type label and icon
- File size (formatted compactly)
- Modification time
- Path with deletion status indicator
- First seen scan and total version count
File Integrity (Files Only)
- Hash state: Displayed as Baseline, Suspect, or Unknown with a colored icon. Click to expand/collapse the full SHA-256 hash value.
- Validation state: Displayed as Valid, Invalid, Unknown, or No Validator. If invalid, the validation error message is shown.
Directory Children Counts (Directories Only)
- Immediate file and folder counts
- Change breakdown: added (green), modified (blue), deleted (red), unchanged (gray)
- Integrity breakdown: suspect hash count, invalid item count
Size History
- Interactive line chart showing how the item’s size has changed over time
- Configurable time window: 7 days, 30 days, 3 months, 6 months, 1 year
Version History
- Chronological list of all item versions, each showing its change type (Initial, Modified, Deleted, Restored)
- Each version displays its temporal range (first scan to last scan)
- Expand a version to see exactly what changed: modification date, size, hash, access state, validation state, hash state
- For directories: shows changes in descendant counts and integrity counts
- The version corresponding to the currently viewed scan is highlighted with an eye icon
- Load older versions on demand
Alerts
- Any alerts associated with this item (suspect hashes, validation failures, access errors)
- Each alert shows its type, timestamp, and details
- Alert status is editable directly from the panel — use the dropdown to change between Open, Flagged, and Dismissed without leaving the Browse page
- Load more alerts on demand
Close the detail panel by clicking the close button.
Comparison Mode
Click Show Compare in the page header to open a second browse card side by side. Each card operates independently — you can:
- Compare across time: Same root, different scans — see how the filesystem changed
- Compare across roots: Different roots (e.g., original vs. backup) at the same or different scans
When both cards have detail panels open, the panels appear adjacent in the center for easy comparison (Card A’s panel is on the right, Card B’s panel is on the left).
Close the second card by clicking Show Compare again.
Use Cases
- Investigation: Drill into specific files when alerts are triggered — click an alert’s item, then inspect version history and validation errors
- Capacity analysis: Use Folder view sorted by Size to find what’s consuming space
- Change review: Browse at two scan points in comparison mode to see exactly what changed
- Integrity audit: Use hash state and validation state filters to focus on files with suspect or invalid states
- Verification: Check hash and validation status of critical files in the detail panel
- Point-in-time browsing: Use the scan picker to see the filesystem as it was at any past scan
Alerts
The Alerts page provides a centralized view for managing integrity issues detected during scans.
Alert Types
fsPulse generates three types of alerts:
Access Denied
Triggered when fsPulse is unable to access an item or folder. These alerts can occur during either the scan phase or the analysis phase:
During Scan Phase:
- Unable to retrieve item metadata (type, size, or modification date)
- Unable to enumerate folder contents (typically due to permission restrictions)
During Analysis Phase:
- Unable to read a file for hashing or validation
Notes:
- If fsPulse cannot determine an item’s type from metadata, the item is recorded as an instance of the “Unknown” type
- Items with failed metadata retrieval, whether “Unknown” or otherwise, are not examined during the analysis phase
Suspect Hash Changes
Triggered when:
- A file’s hash changes between scans
- The file’s modification time has NOT changed
This pattern indicates potential:
- Bit rot (silent data corruption)
- Tampering or malicious modification
- Filesystem anomalies
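The trigger condition boils down to a simple predicate over two consecutive scan states (a sketch; field names are hypothetical):

```python
def is_suspect(prev: dict, cur: dict) -> bool:
    """A hash change is 'suspect' when file contents changed between scans
    but the modification time did not, the signature of silent corruption."""
    return cur["file_hash"] != prev["file_hash"] and cur["mod_date"] == prev["mod_date"]
```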
Invalid Items
Triggered when format validation fails:
- FLAC audio files with invalid structure
- JPEG/PNG images that fail format checks
- PDF files with corruption
- Other validated file types with detected issues
See Validators for details on supported file types.
Alert Status
Each alert can be in one of three states:
- Open: New alert requiring attention
- Flagged: Marked for follow-up or further investigation
- Dismissed: Reviewed and determined to be non-critical
Managing Alerts
Filtering
Filter alerts by:
- Status (Open/Flagged/Dismissed)
- Alert type (Access denied/Hash change/Validation failure)
- Root
- Time range
- Path search
Tip: If you select a root on the Browse or Trends page before navigating to Alerts, the same root will be pre-selected automatically via the shared root context.
Status Actions
- Flag: Mark alert for follow-up
- Dismiss: Acknowledge and close the alert
- Reopen: Change dismissed alert back to open
Batch Operations
Select multiple alerts to update status in bulk.
Alert Details
Click an alert to view:
- Item path and metadata
- Alert timestamp
- Access error details (for access denied alerts)
- Hash change details: previous and new hash values (for suspect hash alerts)
- Validation error message (for invalid items)
- Link to item in Browse view
Integration with Browse
Alerts are also displayed in the Browse page’s item detail panel, providing context when investigating specific files. You can change alert status directly from the Browse detail panel without navigating to the Alerts page.
Workflow Recommendations
- Review Open Alerts: Check the Home page for alert counts, then navigate here
- Investigate: Use Browse to examine affected items
- Triage: Flag important issues, dismiss false positives
- Restore: Use backups to restore corrupted files if needed
- Track: Monitor alert trends on the Trends page
Trends
The Trends page provides interactive visualizations showing how your data evolves over time across multiple scans.
Available Charts
File Size Trends
Track total storage usage over time:
- See growth or reduction in directory sizes
- Identify storage bloat
- Displayed in both decimal (GB) and binary (GiB) units
File/Folder Count Trends
Monitor the number of items:
- Total files and folders over time
- Detect unexpected additions or deletions
- Separate trend lines for files vs. directories
Change Activity
Visualize filesystem activity:
- Additions, modifications, and deletions per scan
- Identify periods of high change
- Understand modification patterns
Alert Trends
Track integrity issues over time:
- Validation failures
- Suspect hash changes
- Alert resolution patterns
Features
Root Selection
Select which scan root to analyze from the dropdown. Each root maintains independent trend data.
Tip: If you select a root on the Browse or Alerts page before navigating to Trends, the same root will be pre-selected automatically via the shared root context.
Date Range Filtering
Customize the time window:
- Last 7 days
- Last 30 days
- Last 3 months
- Last 6 months
- Last year
- Custom range (manual date pickers)
Baseline Exclusion
The Changes and Alerts charts offer a checkbox to exclude the first scan from the visualization. The first scan of a root often shows large numbers of “additions” and alerts which can skew trend visualizations. This toggle only appears when the first scan falls within the selected date range.
Interactive Charts
- Hover for detailed values
- Pan and zoom on time ranges
- Toggle data series on/off
Requirements
Trend analysis requires multiple scans of the same root. After your first scan, you’ll see a message prompting you to run additional scans to generate trend data.
Use Cases
- Capacity Planning: Monitor storage growth rates
- Change Detection: Identify unusual modification patterns
- Validation Monitoring: Track data integrity over time
- Baseline Comparison: See how your filesystem evolves from initial state
History
The History page provides a complete log of scan and task activity — answering questions like “Did my scheduled scan run?” and “What happened with that scan that errored?”
Activity Log
The History page shows a paginated table of completed scans and tasks with key information:
- Scan ID and associated root
- Start and end times
- State (Completed, Stopped, Error)
- Item counts — files, folders, and total size discovered
- Change counts — additions, modifications, and deletions detected
- Alert count — validation failures and suspect hash changes generated
- Scan options — whether hashing and validation were enabled
Filtering
Filter the activity log by:
- Root — Show activity for a specific monitored directory
- Time range — Narrow to a date range
- Outcome — Filter by completed, stopped, or errored
Use Cases
- Schedule verification: Confirm that scheduled scans ran as expected
- Troubleshooting: Identify scans that stopped or errored, and review their details
- Trend awareness: Notice patterns in change counts or alert frequency across scans
- Audit trail: Review what the system has been doing over any time period
Tip: The Home page shows a compact summary of recent activity. Use the History page when you need the full, filterable log.
Data Explorer
The Data Explorer provides both visual query building and free-form query capabilities for analyzing your fsPulse data. It lives in the utility section of the sidebar and is designed for power users who need detailed data access beyond what the primary pages offer.
Overview
Data Explorer offers two ways to query your data:
- Structured tabs (Roots, Scans, Items, Versions, Alerts) — Visual query builder with column selection, sorting, and filtering
- Query tab — Free-form query entry using fsPulse’s query language
Structured Query Tabs
The Roots, Scans, Items, Versions, and Alerts tabs provide a visual interface for building queries without writing query syntax.
Layout
Each structured tab displays:
- Column selector panel (left) — Configure which columns to display and how
- Results table (right) — View query results with pagination
Column Controls
The column selector provides several controls for each available column:
| Control | Description |
|---|---|
| Checkbox | Show or hide the column in results |
| Drag handle | Reorder columns by dragging |
| Sort | Click to cycle through ascending, descending, or no sort |
| Filter | Add a filter condition for this column |
Working with Columns
Show/Hide Columns: Check or uncheck the box next to any column name to include or exclude it from results.
Reorder Columns: Drag columns using the grip handle to change the display order in the results table.
Sort Results: Click the sort control to cycle through no sort, ascending, and descending. Only one column can be sorted at a time.
Filter Data: Click the filter button to add a filter condition. Active filters display as badges showing the filter value. Click the X to remove a filter.
Reset: Click the reset button in the column header to restore all columns to their default visibility and order, and to clear all filters and sorts.
Query Tab
The Query tab provides a free-form interface for writing queries using fsPulse’s SQL-inspired query language.
Features
- Query input — Text area for entering queries
- Execute — Run the query (or press Cmd/Ctrl + Enter)
- Example queries — Expandable section with clickable sample queries
- Documentation link — Quick access to the full query syntax reference
- Results table — Paginated results display
Example Queries
The Query tab includes sample queries you can click to populate the input:
items limit 10
items where item_type:(F) show item_path, size limit 25
items where item_type:(F), size:(>1000000) show item_path, size order by size desc limit 20
alerts where alert_status:(O) show alert_type, item_path, created_at limit 15
versions where item_id:(42) order by first_scan_id
Query Domains
Both interfaces support querying five data domains:
| Domain | Description |
|---|---|
| roots | Configured scan roots |
| scans | Scan metadata and statistics |
| items | Files and folders with their latest version state |
| versions | Item version history — one row per distinct state |
| alerts | Integrity issues and validation failures |
When to Use Each Interface
Use structured tabs when:
- Exploring data without knowing the exact query syntax
- Quickly toggling columns to find relevant information
- Building simple filters and sorts visually
Use the Query tab when:
- Writing complex queries with multiple conditions
- Using advanced query features (comparisons, ranges, multiple filters)
- Reproducing a specific query you’ve used before
- Learning the query syntax with immediate feedback
Query Syntax
For complete documentation on the query language including all operators, column names, and advanced features, see Query Syntax.
Roots, Schedules & Settings
fsPulse provides three dedicated pages for configuration: Roots for managing monitored directories, Schedules for automated scan timing, and Settings for application configuration and system information.
Roots
Adding a Root
- Click Add Root
- Enter the full filesystem path to monitor
- Optionally provide a friendly name
- Save
Managing Roots
- Scan Now: Create a manual scan task for the root
- Delete: Remove the root (also removes associated schedules)
- View root statistics and last scan time
Schedules
fsPulse supports flexible scheduling options for automated monitoring.
Schedule Types
- Daily: Run at a specific time each day
- Weekly: Run on specific days of the week at a chosen time
- Monthly: Run on a specific day of the month
- Interval: Run every N hours/minutes
Creating a Schedule
- Click Add Schedule
- Select the root to scan
- Choose schedule type and timing
- Configure scan options:
- Enable hashing (default: all files)
- Enable validation (default: new/changed files)
- Save
Schedule Management
- Enable/Disable: Temporarily pause schedules without deleting them
- Edit: Modify timing or scan options
- Delete: Remove the schedule
Scans and other tasks are queued and executed sequentially to prevent resource conflicts. You can view upcoming and running tasks on the Home page.
Configuration
A table displays all configurable settings with their values from each source:
| Column | Description |
|---|---|
| Setting | The setting name |
| Default | Built-in default value |
| Config File | Value from config.toml (if set) |
| Environment | Value from environment variable (if set) |
The active value (the one fsPulse actually uses) is highlighted with a green border. Configuration precedence follows: Environment variable > Config file > Default.
Settings that require a server restart to take effect are marked with a restart indicator.
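The precedence rule can be sketched as follows (illustrative only; function and key names are hypothetical, not part of fsPulse):

```python
import os

def effective_value(env_var: str, config: dict, key: str, default):
    """Resolve a setting: environment variable > config file > built-in default."""
    if env_var in os.environ:
        return os.environ[env_var]
    if key in config:
        return config[key]
    return default
```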
Editing Settings
Click Edit on any setting to modify its value in config.toml. The edit dialog provides:
- Input validation (numeric ranges, valid log levels, etc.)
- A Delete option to remove the setting from `config.toml` and revert to the default
See Configuration for details on all available settings and environment variables.
Database
Shows database statistics and maintenance tools:
- Database path: Location of the `fspulse.db` file
- Total size: Current size of the database file
- Wasted space: Space that can be reclaimed through compaction
Compact Database: Over time, deletions and updates leave unused space in the SQLite database. The Compact Database button runs SQLite’s VACUUM command to reclaim this space. This operation runs as a background task.
About
Displays application metadata:
- fsPulse version
- Database schema version
- Build date and git commit information
- Links to documentation and the GitHub repository
Command-Line Interface
fsPulse is a web-first application. The CLI exists solely to launch the web server—all functionality including scanning, querying, browsing, and configuration is accessed through the Interface.
Starting fsPulse
To start the fsPulse server:
fspulse
Or explicitly:
fspulse serve
Both commands are equivalent. The server starts on http://127.0.0.1:8080 by default.
Once running, open your browser to access the full web interface for:
- Managing scan roots and schedules
- Running and monitoring scans
- Browsing your filesystem data
- Querying and exploring results
- Managing alerts
Configuration
fsPulse behavior is configured through environment variables or a config file, not command-line flags.
Environment Variables
Set these before running fspulse:
# Server settings
export FSPULSE_SERVER_HOST=0.0.0.0 # Bind address (default: 127.0.0.1)
export FSPULSE_SERVER_PORT=9090 # Port number (default: 8080)
# Analysis settings
export FSPULSE_ANALYSIS_THREADS=16 # Worker threads (default: 8)
# Logging
export FSPULSE_LOGGING_FSPULSE=debug # Log level (default: info)
# Data location
export FSPULSE_DATA_DIR=/custom/path # Data directory override
fspulse
Configuration File
fsPulse also reads from config.toml in the data directory. See Configuration for complete documentation including:
- All available settings
- Environment variable reference
- Platform-specific data directory locations
- Docker configuration
Getting Help
View version and basic usage:
fspulse --help
fspulse --version
Related Documentation
- Configuration — Complete configuration reference
- Interface — Guide to all UI features
- Docker Deployment — Running fsPulse in Docker
Query Syntax
fsPulse provides a flexible, SQL-like query language for exploring scan results. This language supports filtering, custom column selection, ordering, and limiting the number of results.
Query Structure
Each query begins with one of the five supported domains:
`roots`, `scans`, `items`, `versions`, and `alerts`
You can then add any of the following optional clauses:
DOMAIN [WHERE ...] [SHOW ...] [ORDER BY ...] [LIMIT ...] [OFFSET ...]
Column Availability
Each domain has a set of available columns. Columns marked as default are shown when no SHOW clause is specified.
roots Domain
| Column | Type | Default |
|---|---|---|
root_id | Integer | Yes |
root_path | Path | Yes |
scans Domain
| Column | Type | Default | Description |
|---|---|---|---|
scan_id | Integer | Yes | Unique scan identifier |
root_id | Integer | Yes | Root directory identifier |
schedule_id | Integer | Yes | Schedule identifier (null for manual scans) |
started_at | Date | Yes | Timestamp when scan started |
ended_at | Date | Yes | Timestamp when scan ended (null if incomplete) |
was_restarted | Boolean | Yes | True if scan was resumed after restart |
scan_state | Scan State Enum | No | State of the scan |
is_hash | Boolean | Yes | Hash new or changed files |
hash_all | Boolean | No | Hash all items including unchanged |
is_val | Boolean | Yes | Validate new or changed files |
file_count | Integer | Yes | Count of files found in the scan |
folder_count | Integer | Yes | Count of directories found in the scan |
total_size | Integer | Yes | Total size in bytes of all files |
alert_count | Integer | Yes | Number of alerts created during the scan |
add_count | Integer | Yes | Number of items added in the scan |
modify_count | Integer | Yes | Number of items modified in the scan |
delete_count | Integer | Yes | Number of items deleted in the scan |
val_unknown_count | Integer | No | Files with unknown validation state |
val_valid_count | Integer | No | Files with valid validation state |
val_invalid_count | Integer | No | Files with invalid validation state |
val_no_validator_count | Integer | No | Files with no available validator |
hash_unknown_count | Integer | No | Files with unknown hash state |
hash_baseline_count | Integer | No | Files with baseline hash state |
hash_suspect_count | Integer | No | Files with suspect hash state |
error | String | No | Error message if scan failed |
items Domain
The items domain queries each item’s latest version — the most recent known state. Identity columns come from the items table; state columns come from the item’s current version.
| Column | Type | Default | Description |
|---|---|---|---|
item_id | Integer | Yes | Unique item identifier |
root_id | Integer | Yes | Root directory identifier |
item_path | Path | Yes | Full path of the item |
item_name | Path | No | Filename or directory name (last segment) |
item_type | Item Type Enum | Yes | File, Directory, Symlink, or Unknown |
version_id | Integer | No | Current version identifier |
first_scan_id | Integer | No | Scan where current version first appeared |
last_scan_id | Integer | Yes | Last scan confirming current state |
is_deleted | Boolean | Yes | True if item is currently deleted |
access | Access Status | No | Access state (NoError, MetaError, ReadError) |
mod_date | Date | Yes | Last modification date |
size | Integer | No | File size in bytes |
last_val_scan | Integer | No | Last scan that evaluated validation (files only; null for folders) |
val_state | Validation Status | No | Validation state (files only; null for folders) |
val_error | String | No | Validation error message (files only; null for folders) |
last_hash_scan | Integer | No | Last scan that evaluated the hash (files only; null for folders) |
file_hash | String | No | SHA-256 content hash (files only; null for folders) |
hash_state | Hash State | No | Hash integrity state (files only; null for folders) |
versions Domain
The versions domain queries individual item version rows — each representing a distinct state of an item over a temporal range. Use this domain to explore item history and state changes.
| Column | Type | Default | Description |
|---|---|---|---|
version_id | Integer | Yes | Unique version identifier |
item_id | Integer | Yes | Item this version belongs to |
root_id | Integer | Yes | Root directory identifier |
item_path | Path | No | Full path of the item |
item_name | Path | No | Filename or directory name (last segment) |
item_type | Item Type Enum | Yes | File, Directory, Symlink, or Unknown |
first_scan_id | Integer | Yes | Scan where this version was first observed |
last_scan_id | Integer | Yes | Last scan confirming this version’s state |
is_deleted | Boolean | Yes | True if item was deleted in this version |
access | Access Status | No | Access state |
mod_date | Date | No | Last modification date |
size | Integer | No | File size in bytes |
last_val_scan | Integer | No | Last scan that evaluated validation (files only; null for folders) |
val_state | Validation Status | No | Validation state (files only; null for folders) |
val_error | String | No | Validation error message (files only; null for folders) |
last_hash_scan | Integer | No | Last scan that evaluated the hash (files only; null for folders) |
file_hash | String | No | SHA-256 content hash (files only; null for folders) |
hash_state | Hash State | No | Hash integrity state (files only; null for folders) |
add_count | Integer | No | Descendant items added (folders only; null for files) |
modify_count | Integer | No | Descendant items modified (folders only; null for files) |
delete_count | Integer | No | Descendant items deleted (folders only; null for files) |
unchanged_count | Integer | No | Descendant items unchanged (folders only; null for files) |
val_unknown_count | Integer | No | Descendant files with unknown validation (folders only) |
val_valid_count | Integer | No | Descendant files with valid validation (folders only) |
val_invalid_count | Integer | No | Descendant files with invalid validation (folders only) |
val_no_validator_count | Integer | No | Descendant files with no validator (folders only) |
hash_unknown_count | Integer | No | Descendant files with unknown hash state (folders only) |
hash_baseline_count | Integer | No | Descendant files with baseline hash state (folders only) |
hash_suspect_count | Integer | No | Descendant files with suspect hash state (folders only) |
alerts Domain
| Column | Type | Default | Description |
|---|---|---|---|
| alert_id | Integer | No | Unique alert identifier |
| alert_type | Alert Type Enum | Yes | Type of alert |
| alert_status | Alert Status Enum | Yes | Current status (Open, Flagged, Dismissed) |
| root_id | Integer | No | Root directory identifier |
| scan_id | Integer | No | Scan that generated the alert |
| item_id | Integer | No | Item the alert is about |
| item_path | Path | Yes | Path of the affected item |
| created_at | Date | Yes | When the alert was created |
| updated_at | Date | No | When the alert status was last changed |
| prev_hash_scan | Integer | No | Previous hash scan (for suspect hash) |
| hash_old | String | No | Previous hash value |
| hash_new | String | No | New hash value |
| val_error | String | Yes | Validation error message |
The WHERE Clause
The WHERE clause filters results using one or more filters. Each filter has the structure:
column_name:(value1, value2, ...)
Values must match the column’s type. You can use individual values, ranges (when supported), or a comma-separated combination.
| Type | Examples | Notes |
|---|---|---|
| Integer | `5`, `1..5`, `3, 5, 7..9`, `> 1024`, `< 10`, `null`, `not null` | Supports ranges, comparators, and nullability. Ranges are inclusive. |
| Date | `2024-01-01`, `2024-01-01..2024-06-30`, `null`, `not null` | Use YYYY-MM-DD. Ranges are inclusive. |
| Boolean | `true`, `false`, `T`, `F`, `null`, `not null` | Unquoted. |
| String | `'example'`, `'error: missing EOF'`, `null`, `not null` | Quoted strings. |
| Path | `'photos/reports'`, `'file.txt'` | Must be quoted. Null values are not supported. |
| Validation Status | `V`, `I`, `N`, `U`, `null`, `not null` | Valid, Invalid, No Validator, Unknown. Null for folders. Unquoted. |
| Hash State | `V`, `S`, `U`, `null`, `not null` | Valid, Suspect, Unknown. Null for folders. Unquoted. |
| Item Type Enum | `F`, `D`, `S`, `U` | File, Directory, Symlink, Unknown. Unquoted. |
| Alert Type Enum | `H`, `I`, `A` | Suspect Hash, Invalid Item, Access Denied. Unquoted. |
| Alert Status Enum | `O`, `F`, `D` | Open, Flagged, Dismissed. Unquoted. |
| Scan State Enum | `S`, `W`, `AF`, `AS`, `C`, `P`, `E` | Scanning, Sweeping, Analyzing Files, Analyzing Scan, Completed, Stopped, Error. `A` is shorthand for `AF`. Unquoted. |
| Access Status | `N`, `M`, `R` | No Error, Meta Error, Read Error. Unquoted. |
Combining Filters
When specifying multiple values within a single filter, the match is logically OR. When specifying multiple filters across different columns, the match is logically AND.
For example:
scans where started_at:(2025-01-01..2025-01-07, 2025-02-01..2025-02-07), is_hash:(T)
This query matches scans that:
- Occurred in either the first week of January 2025 or the first week of February 2025
- AND were performed with hashing enabled
The SHOW Clause
The SHOW clause controls which columns are displayed and how some of them are formatted. If omitted, a default column set is used.
You may specify:
- A list of column names
- The keyword `default` to insert the default set
- The keyword `all` to show all available columns
Formatting modifiers can be applied using the @ symbol:
item_path@name, mod_date@short
Format Specifiers by Type
| Type | Allowed Format Modifiers |
|---|---|
| Date | full, short, timestamp |
| Path | full, relative, short, name |
| Validation / Hash State / Enum / Boolean | full, short |
| Integer / String | (no formatting options) |
The timestamp format modifier converts dates to UTC timestamps (seconds since Unix epoch), which is useful for programmatic processing or web applications that need to format dates in the user’s local timezone.
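As an illustration of consuming `@timestamp` output downstream, epoch seconds can be turned back into a readable UTC date with standard tools (a sketch assuming GNU coreutils; on macOS, use `date -u -r <seconds>` instead of `-d`):

```shell
# Convert epoch seconds (as emitted by started_at@timestamp) back to a
# human-readable UTC date. Requires GNU date.
date -u -d @1700000000 +%Y-%m-%dT%H:%M:%SZ
# → 2023-11-14T22:13:20Z
```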
The ORDER BY Clause
Specifies sort order for the results:
items order by mod_date desc, item_path asc
If direction is omitted, ASC is assumed.
The LIMIT and OFFSET Clauses
LIMIT restricts the number of rows returned. OFFSET skips a number of rows before returning results.
items limit 50 offset 100
Examples
# Items whose path contains 'reports'
items where item_path:('reports')
# Large files sorted by size
items where item_type:(F), size:(> 1048576) show default, size order by size desc
# Version history for a specific item
versions where item_id:(42) order by first_scan_id
# Deleted items across all roots
items where is_deleted:(true)
# Versions with validation failures
versions where val_state:(I) show default, val_error order by first_scan_id desc
# Items with suspect hash state
items where hash_state:(S) show default, hash_state, file_hash order by item_path
# Open or flagged alerts for suspect hashes
alerts where alert_type:(H), alert_status:(O, F) order by created_at desc
# Scans with timestamps for programmatic processing
scans show scan_id, started_at@timestamp, file_count order by started_at desc limit 10
# Scans with change and alert counts
scans show scan_id, file_count, total_size, add_count, modify_count, delete_count, alert_count order by started_at desc
See also: Data Explorer · Validators · Configuration
Configuration
fsPulse supports persistent, user-defined configuration through a file named config.toml. This file allows you to control logging behavior, analysis settings, server configuration, and more.
Web UI: Most configuration settings can also be viewed and edited through the Settings page in the web interface, which shows the active value and its source (default, config file, or environment variable).
📦 Docker Users: If you’re running fsPulse in Docker, see the Docker Deployment chapter for Docker-specific configuration including environment variable overrides and volume management.
Finding config.toml
The config.toml file is stored in fsPulse’s data directory. The location depends on how you’re running fsPulse:
Docker Deployments
When running in Docker, the data directory is /data, so the config file is located at /data/config.toml inside the container. fsPulse automatically creates this file with default settings on first run.
To access it from your host machine:
# View the config
docker exec fspulse cat /data/config.toml
# Extract to edit
docker exec fspulse cat /data/config.toml > config.toml
See the Docker Deployment chapter for details on editing the config in Docker.
Native Installations
fsPulse uses the directories crate to determine the platform-specific data directory location:
| Platform | Data Directory Location | Example Path |
|---|---|---|
| Linux | $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse | /home/alice/.local/share/fspulse |
| macOS | $HOME/Library/Application Support/fspulse | /Users/alice/Library/Application Support/fspulse |
| Windows | %LOCALAPPDATA%\fspulse\data | C:\Users\Alice\AppData\Local\fspulse\data |
The config file is located at <data_dir>/config.toml.
On first run, fsPulse automatically creates the data directory and writes a default config.toml if one doesn’t exist.
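On Linux, the effective config path can be resolved with the same `$XDG_DATA_HOME` fallback rule shown in the table above (a sketch for Linux only; macOS and Windows use their own fixed locations):

```shell
# Resolve the Linux config path: $XDG_DATA_HOME/fspulse if XDG_DATA_HOME
# is set, otherwise $HOME/.local/share/fspulse.
config_path="${XDG_DATA_HOME:-$HOME/.local/share}/fspulse/config.toml"
echo "$config_path"
```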
Tip: You can delete `config.toml` at any time to regenerate it with defaults. Newly introduced settings will not automatically be added to an existing file.

Override: The data directory location can be overridden using the `FSPULSE_DATA_DIR` environment variable. See Data Directory and Database Settings for details.
Configuration Settings
Here are the current available settings and their default values:
[logging]
fspulse = "info"
lopdf = "error"
[server]
port = 8080
host = "127.0.0.1"
[analysis]
threads = 8
Logging
fsPulse uses the Rust log crate, and so does the PDF validation crate lopdf. You can configure logging levels independently for each subsystem in the [logging] section.
Supported log levels:
- `error` – only critical errors
- `warn` – warnings and errors
- `info` – general status messages (default for fsPulse)
- `debug` – verbose output for debugging
- `trace` – extremely detailed logs
Log File Behavior
- Logs are written to `<data_dir>/logs/`
- Each run of fsPulse creates a new log file, named using the current date and time
- Individual log files are capped at 50 MB; if a single run exceeds this, it continues in a new file
- fsPulse retains up to 20 log files; older files are automatically deleted
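The retention rule amounts to "keep the 20 newest files." A hypothetical shell sketch of the same policy (demo filenames, not fsPulse's actual log names or cleanup code):

```shell
# Simulate keep-newest-20 retention on 25 demo log files.
mkdir -p /tmp/retention_demo && cd /tmp/retention_demo
for i in $(seq -w 1 25); do touch "run_$i.log"; done

# Delete the 5 oldest (zero-padded names sort chronologically here),
# leaving the 20 newest in place.
ls -1 | sort | head -n 5 | xargs rm
ls -1 | wc -l
```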
Server Settings
The [server] section controls the web UI server behavior when running fspulse serve.
- `host`: IP address to bind to (default: `127.0.0.1`)
  - `127.0.0.1` – localhost only (secure, accessible only from the same machine)
  - `0.0.0.0` – all interfaces (required for Docker and remote access)
- `port`: port number to listen on (default: `8080`)
Note: In Docker deployments, the host should be 0.0.0.0 to allow access from outside the container. The Docker image sets this automatically via environment variable.
Analysis Settings
The [analysis] section controls how many threads are used during the analysis phase of scanning (for hashing and validation).
- `threads`: number of worker threads (default: `8`)
You can adjust this based on your system’s CPU count or performance needs. fsPulse uses SHA-256 for file hashing to detect content changes and verify integrity.
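Because fsPulse uses standard SHA-256, a file's digest can be cross-checked independently with coreutils (the exact stored representation in fsPulse is an assumption here; `sha256sum` prints lowercase hex):

```shell
# SHA-256 of a known input, computed with coreutils for comparison
# against a hash reported by fsPulse.
printf 'hello' | sha256sum
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -
```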
Environment Variables
All configuration settings can be overridden using environment variables. This is particularly useful for:
- Docker deployments where editing files is inconvenient
- Different environments (development, staging, production) with different settings
- NAS deployments (TrueNAS, Unraid) using web-based configuration UIs
- CI/CD pipelines where configuration is managed externally
How It Works
Environment variables follow the pattern: FSPULSE_<SECTION>_<FIELD>
The <SECTION> corresponds to a section in config.toml (like [server], [logging], [analysis]), and <FIELD> is the setting name within that section.
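The name derivation is purely mechanical: uppercase the section and field, then join with underscores. A shell sketch:

```shell
# Derive the environment variable name for a given section/field,
# e.g. [server] port → FSPULSE_SERVER_PORT.
section=server
field=port
var="FSPULSE_$(echo "$section" | tr 'a-z' 'A-Z')_$(echo "$field" | tr 'a-z' 'A-Z')"
echo "$var"
# → FSPULSE_SERVER_PORT
```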
Precedence (highest to lowest):
1. Environment variables – override everything
2. `config.toml` – user-defined settings
3. Built-in defaults – fallback values
This allows you to set sensible defaults in config.toml and override them as needed per deployment.
Complete Variable Reference
Server Settings
Control the web UI server behavior (when running fspulse serve):
| Variable | Default | Valid Values | Description |
|---|---|---|---|
| FSPULSE_SERVER_HOST | 127.0.0.1 | IP address | Bind address. Use 0.0.0.0 for Docker/remote access, 127.0.0.1 for localhost only |
| FSPULSE_SERVER_PORT | 8080 | 1-65535 | Web UI port number |
Examples:
# Native - serve only on localhost
export FSPULSE_SERVER_HOST=127.0.0.1
export FSPULSE_SERVER_PORT=8080
fspulse serve
# Docker - must bind to all interfaces
docker run -e FSPULSE_SERVER_HOST=0.0.0.0 -e FSPULSE_SERVER_PORT=9090 -p 9090:9090 ...
Logging Settings
Configure log output verbosity:
| Variable | Default | Valid Values | Description |
|---|---|---|---|
| FSPULSE_LOGGING_FSPULSE | info | error, warn, info, debug, trace | fsPulse application log level |
| FSPULSE_LOGGING_LOPDF | error | error, warn, info, debug, trace | PDF library (lopdf) log level |
Examples:
# Enable debug logging
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_LOGGING_LOPDF=error
# Docker
docker run -e FSPULSE_LOGGING_FSPULSE=debug ...
Analysis Settings
Configure scan behavior and performance:
| Variable | Default | Valid Values | Description |
|---|---|---|---|
| FSPULSE_ANALYSIS_THREADS | 8 | 1-24 | Number of worker threads for analysis phase (hashing/validation) |
Examples:
# Use 16 threads for faster scanning
export FSPULSE_ANALYSIS_THREADS=16
# Docker
docker run -e FSPULSE_ANALYSIS_THREADS=16 ...
Data Directory and Database Settings
Control where fsPulse stores its data:
| Variable | Default | Valid Values | Description |
|---|---|---|---|
| FSPULSE_DATA_DIR | Platform-specific | Directory path | Override the data directory location. Contains config, logs, and database (by default). Cannot be set in config.toml. |
| FSPULSE_DATABASE_DIR | <data_dir> | Directory path | Override database directory only (advanced). Stores the database outside the data directory. This is a directory path, not a file path - the database file is always named fspulse.db |
Data Directory:
The data directory contains configuration (config.toml), logs (logs/), and the database (fspulse.db) by default. It is determined by:
1. `FSPULSE_DATA_DIR` environment variable (if set)
2. Platform-specific project local directory (default):
   - Linux: `$XDG_DATA_HOME/fspulse` or `$HOME/.local/share/fspulse`
   - macOS: `$HOME/Library/Application Support/fspulse`
   - Windows: `%LOCALAPPDATA%\fspulse\data`
   - Docker: `/data`
Database Location:
By default, the database is stored in the data directory as fspulse.db. You can override this to store the database separately:
Database Directory Precedence:
1. `FSPULSE_DATABASE_DIR` environment variable (if set) – highest priority
2. `[database]` `dir` setting in `config.toml` (if configured)
3. Data directory (from `FSPULSE_DATA_DIR` or platform default)
Important Notes:
- The database file is always named `fspulse.db` within the determined directory
- Configuration and logs always remain in the data directory, even if the database is moved
- For Docker: it’s recommended to use volume/bind mounts to `/data` rather than overriding `FSPULSE_DATA_DIR`
Docker-Specific Variables
These variables are specific to Docker deployments:
| Variable | Default | Valid Values | Description |
|---|---|---|---|
| PUID | 1000 | UID number | User ID to run fsPulse as (for NAS permission matching) |
| PGID | 1000 | GID number | Group ID to run fsPulse as (for NAS permission matching) |
| TZ | UTC | Timezone string | Timezone for log timestamps and UI (e.g., America/New_York) |
See Docker Deployment - NAS Deployments for details on PUID/PGID usage.
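To find the values to pass as PUID/PGID, query your own IDs on the host:

```shell
# The UID and GID of the invoking user. Pass these as PUID/PGID so files
# created by the container match your host user's ownership.
id -u
id -g
```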
Usage Examples
Native (Linux/macOS/Windows):
# Set environment variables
export FSPULSE_SERVER_PORT=9090
export FSPULSE_LOGGING_FSPULSE=debug
export FSPULSE_ANALYSIS_THREADS=16
# Run fsPulse (uses env vars)
fspulse serve
Docker - Command Line:
docker run -d \
--name fspulse \
-e FSPULSE_SERVER_PORT=9090 \
-e FSPULSE_LOGGING_FSPULSE=debug \
-e FSPULSE_ANALYSIS_THREADS=16 \
-p 9090:9090 \
-v fspulse-data:/data \
gtunesdev/fspulse:latest
Docker Compose:
services:
fspulse:
image: gtunesdev/fspulse:latest
environment:
- FSPULSE_SERVER_PORT=9090
- FSPULSE_LOGGING_FSPULSE=debug
- FSPULSE_ANALYSIS_THREADS=16
ports:
- "9090:9090"
Verifying Environment Variables
To see what environment variables fsPulse is using:
Native:
env | grep FSPULSE_
Docker:
docker exec fspulse env | grep FSPULSE_
Docker Configuration
When running fsPulse in Docker, configuration is managed slightly differently. The config file lives at /data/config.toml inside the container, and you have several options for customizing settings.
For step-by-step instructions on configuring fsPulse in Docker, including editing config files and using environment variables, see the Docker Deployment - Configuration section.
New Settings and Restoring Defaults
fsPulse may expand its configuration options over time. When new settings are introduced, they won’t automatically appear in your existing config.toml. To take advantage of new options, either:
- Manually add new settings to your config file
- Delete the file to allow fsPulse to regenerate it with all current defaults
Validators
fsPulse can optionally validate file contents during the analysis phase of a scan. To enable validation, configure it in the web UI when setting up or initiating a scan.
Validation allows fsPulse to go beyond basic metadata inspection and attempt to decode the file’s contents using format-specific logic. This helps detect corruption or formatting issues in supported file types.
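Full decoding matters because weaker checks pass easily. As a hedged illustration (not fsPulse's actual code): a file can carry a valid PNG signature while its body is garbage, so a magic-byte check accepts it, while a real decoder such as the `image` crate fsPulse uses would reject it:

```shell
# Build a file with a correct 8-byte PNG signature but a garbage body.
printf '\211PNG\r\n\032\n' > /tmp/fake.png
printf 'definitely not image data' >> /tmp/fake.png

# A naive magic-byte check happily accepts it:
if head -c 8 /tmp/fake.png | od -An -tx1 | tr -d ' \n' | grep -q '^89504e470d0a1a0a$'; then
  echo "signature check: looks like PNG"
fi
# Only full format validation (actually decoding the image) catches the
# corrupt body behind the valid signature.
```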
Validation Status Codes
Each file in the database has an associated validation status (folders do not have a validation state):
| Status Code | Meaning |
|---|---|
| U | Unknown — item has never been included in a validation scan |
| V | Valid — most recent validation scan found no issues |
| I | Invalid — validation failed; see validation_error field |
| N | No Validator — fsPulse does not currently support this file type |
The `validation_error` field contains the error message returned by the validator only if the item was marked invalid. This field is empty for valid items or items with no validator.
Note: Some validation “errors” surfaced by the underlying libraries may not indicate corruption, but rather unsupported edge cases or metadata formatting. Always review the error messages before assuming a file is damaged.
Supported Validators
fsPulse relies on external Rust crates for performing format-specific validation. We gratefully acknowledge the work of the developers behind these crates for making them available to the Rust community.
| File Types | Crate | Link |
|---|---|---|
| FLAC audio (.flac) | claxon | claxon on GitHub |
| Images (.jpg, .jpeg, .png, .gif, .tiff, .bmp) | image | image on GitHub |
| PDF documents (.pdf) | lopdf | lopdf on GitHub |
Validation support may expand in future versions of fsPulse to cover additional file types such as ZIP archives, audio metadata, or XML/JSON files.
See the Query Syntax page for full details on query clauses and supported filters.
Advanced Topics
This section covers advanced concepts and technical details about fsPulse’s internal operation.
Contents
- Concepts — Core concepts and terminology
- Database Schema — SQLite schema structure and migrations
These topics are useful for developers, contributors, or users who want to understand fsPulse’s architecture more deeply.
Concepts
fsPulse tracks the state of your filesystem over time using a temporal versioning model. The core entities — roots, scans, items, item versions, alerts, schedules, and tasks — form a layered model for understanding how your data evolves.
Root
A root is the starting point for a scan. It represents a specific path on the filesystem that you tell fsPulse to track.
- Paths are stored as absolute paths.
- Each root has a unique ID.
- You can scan a root multiple times over time.
Scan
A scan is a snapshot of a root directory at a specific point in time.
Each scan records:
- The time the scan was started and ended
- Whether hashing and validation were enabled
- Counts of files, folders, and total size discovered
- Counts of additions, modifications, and deletions detected
- Counts of files in each validation state and hash state
- Any alerts generated
Scans are always tied to a root via root_id and are ordered chronologically by started_at.
Item
An item represents the stable identity of a single file or folder discovered during scanning. The items table stores only identity information:
- Root
- Path
- Name (last path segment)
- Type (File, Directory, Symlink, or Unknown)
An item’s mutable state — metadata, hash, validation — is stored in item versions, not in the item row itself.
Item Version
An item version captures the full known state of an item at a point in time. fsPulse uses temporal versioning: instead of maintaining one mutable row per item, the system stores one row per distinct state. A new version is created only when an item’s observable state changes.
Each version contains:
- Temporal range — `first_scan_id` (when this state was first observed) and `last_scan_id` (last scan where it was confirmed)
- Deletion status — whether the item existed or had been deleted
- Access status — whether the item could be read successfully
- Metadata — modification date and size
- Validation state — format validation state and any error message (files only; null for folders)
- Hash and hash state — SHA-256 content hash and hash integrity state (files only; null for folders). See Scanning - Hash States
- Descendant change counts — add, modify, delete, and unchanged counts for child items (folders only; null for files)
- Descendant state counts — counts of descendant files in each validation and hash state (folders only; null for files)
An item that exists unchanged across 50 scans has exactly one version row. You never need to examine multiple versions to reconstruct the current state — each version is a complete snapshot.
Deriving Change Types
Change types are derived by comparing adjacent versions of an item:
- Add: No previous version exists, or the previous version was a deletion
- Delete: This version marks the item as deleted
- Modify: Previous version exists, neither is deleted, and state differs
Alert
An alert flags a potential integrity issue detected during scanning. There are three alert types:
- Suspect Hash — file content hash changed but modification time did not, suggesting bit rot or tampering
- Invalid Item — format validation detected corruption in a supported file type
- Access Denied — fsPulse could not access the item’s metadata or contents
Alerts have statuses (Open, Flagged, Dismissed) for triage workflows.
Schedule
A schedule defines automatic recurring scans. Schedules specify:
- Which root to scan
- Timing (daily, weekly, monthly, or interval-based)
- Scan options (hashing mode, validation mode)
Schedules can be enabled or disabled independently.
Task
A task is a unit of work in the execution queue. Tasks are created from manual scan requests or triggered by schedules. The Home page shows active and upcoming tasks; the History page shows completed tasks.
Tasks can be paused globally, and individual tasks can be stopped while in progress.
Entity Relationships
Root
├── Schedule (recurring scan configuration)
├── Scan (one per execution)
│ └── Alert (integrity issues found)
└── Item (stable identity)
└── Item Version (state at a point in time)
These concepts form the foundation of fsPulse’s scanning, browsing, and query capabilities.
Database
fsPulse uses an embedded SQLite database to store all scan-related data. The database uses a temporal versioning model where item state is tracked through version rows rather than mutable updates.
Database Name and Location
The database file is always named:
fspulse.db
Data Directory
fsPulse uses a data directory to store application data including configuration, logs, and (by default) the database. The data directory location is determined by:
1. `FSPULSE_DATA_DIR` environment variable (if set) – overrides the default location
2. Platform-specific default – uses the `directories` crate’s project local directory:
| Platform | Value | Example |
|---|---|---|
| Linux | $XDG_DATA_HOME/fspulse or $HOME/.local/share/fspulse | /home/alice/.local/share/fspulse |
| macOS | $HOME/Library/Application Support/fspulse | /Users/Alice/Library/Application Support/fspulse |
| Windows | {FOLDERID_LocalAppData}\fspulse\data | C:\Users\Alice\AppData\Local\fspulse\data |
| Docker | /data | /data |
What’s stored in the data directory:
- Configuration file (`config.toml`)
- Log files (`logs/`)
- Database file (`fspulse.db`) – by default
Note for Docker users: The data directory defaults to /data and can be overridden with FSPULSE_DATA_DIR, but this is generally not recommended since you can map any host directory or Docker volume to /data instead.
Default Database Location
By default, the database is stored in the data directory:
<data_dir>/fspulse.db
For example:
/home/alice/.local/share/fspulse/fspulse.db
Custom Database Location
If you need to store the database outside the data directory (for example, on a different volume or network share), you can override the database directory specifically:
Environment Variable:
export FSPULSE_DATABASE_DIR=/path/to/custom/directory
fspulse serve
Config File (config.toml):
[database]
dir = "/path/to/custom/directory"
In both cases, fsPulse will store the database as fspulse.db inside the specified directory. The filename cannot be changed — only the directory is configurable.
Database Location Precedence:
1. `FSPULSE_DATABASE_DIR` environment variable (highest priority)
2. `[database].dir` in `config.toml`
3. Data directory (from `FSPULSE_DATA_DIR` or platform default)
Important: Configuration and logs always remain in the data directory, even when the database is moved to a custom location.
See the Configuration - Database Settings section for more details.
Schema Overview
The database schema reflects fsPulse’s temporal versioning model:
| Table | Purpose |
|---|---|
| roots | Scanned root directories |
| scans | Individual scan executions with timing, settings, and summary statistics |
| items | Stable identity for each discovered file or folder (path, type, root) |
| item_versions | Temporal state — one row per distinct state of an item, with full metadata snapshot |
| alerts | Integrity issues (suspect hashes, validation failures, access errors) |
| scan_schedules | Recurring scan configurations (timing, options) |
| tasks | Work queue entries for scans and other operations |
| scan_undo_log | Transient rollback support for in-progress scans |
Temporal Versioning
The items table stores only identity information (root, path, name, type). All mutable state lives in item_versions, where each row represents a distinct state with a temporal range:
- `first_scan_id` — the scan where this state was first observed
- `last_scan_id` — the most recent scan where this state was confirmed
An item that remains unchanged across many scans has a single version row. A new version row is created only when observable state changes (metadata, hash, validation, or deletion status).
Schema Versioning
The schema is versioned (currently version 24) and automatically migrated on startup. fsPulse handles all upgrades transparently — no manual migration steps are needed.
Database Compaction
Over time, deletions and updates can leave unused space in the database file. The Settings page provides a Compact Database action that reclaims this space by running SQLite’s VACUUM command.
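What compaction does can be demonstrated with any SQLite database. A standalone sketch using the `sqlite3` CLI (a demo file, not fsPulse's own database):

```shell
# Fill a table, delete everything, then VACUUM. Deleted pages stay in the
# file's freelist until VACUUM rebuilds the database and shrinks the file.
db=/tmp/vacuum_demo.db
rm -f "$db"
sqlite3 "$db" "CREATE TABLE t(x);
  WITH RECURSIVE c(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM c WHERE i<20000)
  INSERT INTO t SELECT i FROM c;
  DELETE FROM t;"
before=$(stat -c%s "$db")
sqlite3 "$db" "VACUUM;"
after=$(stat -c%s "$db")
echo "before=$before after=$after"
```

The file size after `VACUUM` is smaller than before, which is exactly the effect of the Compact Database action.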
Exploring the Database
Because fsPulse uses SQLite, you can inspect the database using any compatible tool, such as:
- DB Browser for SQLite
- The `sqlite3` command-line tool
- SQLite integrations in many IDEs and database browsers
⚠️ Caution: Making manual changes to the database may affect fsPulse’s behavior or stability. Read-only access is recommended.
fsPulse manages all internal data access automatically. Most users will not need to interact with the database directly.
Development
fsPulse is under active development, with regular improvements being made to both its functionality and documentation.
Contribution Policy
At this time, fsPulse is not open for public contribution. This may change in the future as the project matures and its architecture stabilizes.
If you’re interested in the project, you’re encouraged to:
- Explore the source code on GitHub
- Open GitHub issues to report bugs or request features
- Follow the project for updates
Your interest and feedback are appreciated.
If contribution opportunities open in the future, setup instructions and contribution guidelines will be added to this page.
License
fsPulse is released under the MIT License.
You are free to use, modify, and distribute the software under the terms of this license.
Full License Text
MIT License
Copyright (c) 2025 gtunes-dev
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
See the full LICENSE file in the repository.
Acknowledgements
fsPulse relies on several open source Rust crates. We gratefully acknowledge the work of these maintainers, particularly for enabling file format validation.
File Format Validation
The following libraries enable fsPulse’s ability to detect corrupted files:
- `claxon` — FLAC audio decoding and validation
- `image` — image format decoding for JPG, PNG, GIF, TIFF, BMP
- `lopdf` — PDF parsing and validation
See Validators for the complete list of supported file types.
Additional Dependencies
fsPulse wouldn’t be possible without the incredible open source ecosystem it’s built upon:
Web Interface:
- shadcn/ui — Beautiful, accessible component library
- Radix UI — Unstyled, accessible UI primitives
- Tailwind CSS — Utility-first CSS framework
- Lucide — Clean, consistent icon set
- React — UI framework
- Recharts — Composable charting library
- TanStack Virtual — Virtualized list rendering
Backend:
- rusqlite — SQLite database interface
- axum — Web framework
- tokio — Async runtime
- clap — Command-line argument parsing
- figment — Configuration management
- flexi_logger — Flexible logging
- pest — Parser generator (for query language)
The complete list of dependencies is available in the project’s Cargo.toml and package.json.
Thank you to all the open source maintainers whose work makes fsPulse possible.