Advanced Use Cases
ZeroFS enables powerful architectures that were previously complex or expensive to implement. From geo-distributed ZFS pools to tiered storage systems, this guide explores advanced use cases that showcase ZeroFS's full potential.
These advanced architectures require careful planning and testing. Start with simpler configurations and gradually add complexity as you gain experience with ZeroFS.
Geo-Distributed Storage
Create globally distributed storage systems using ZeroFS instances across multiple regions:
# Terminal 1 - US East
ZEROFS_ENCRYPTION_PASSWORD='shared-key' \
AWS_DEFAULT_REGION=us-east-1 \
ZEROFS_NBD_PORTS='10809' \
ZEROFS_NBD_DEVICE_SIZES_GB='100' \
zerofs s3://my-bucket/us-east-db
# Terminal 2 - EU West
ZEROFS_ENCRYPTION_PASSWORD='shared-key' \
AWS_DEFAULT_REGION=eu-west-1 \
ZEROFS_NBD_PORTS='10810' \
ZEROFS_NBD_DEVICE_SIZES_GB='100' \
zerofs s3://my-bucket/eu-west-db
# Terminal 3 - Asia Pacific
ZEROFS_ENCRYPTION_PASSWORD='shared-key' \
AWS_DEFAULT_REGION=ap-southeast-1 \
ZEROFS_NBD_PORTS='10811' \
ZEROFS_NBD_DEVICE_SIZES_GB='100' \
zerofs s3://my-bucket/asia-db
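To turn these three instances into a single geo-redundant pool, attach each NBD export and mirror across them with ZFS. A minimal sketch, assuming all three instances are reachable from the same host on the ports above (the pool name is illustrative):

# Attach one NBD device per region
nbd-client 127.0.0.1 10809 /dev/nbd0   # US East
nbd-client 127.0.0.1 10810 /dev/nbd1   # EU West
nbd-client 127.0.0.1 10811 /dev/nbd2   # Asia Pacific

# Mirror writes across all three regions
zpool create global-pool mirror /dev/nbd0 /dev/nbd1 /dev/nbd2

# Every block is now stored on three continents
zpool status global-pool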
Benefits of Geo-Distribution
- Disaster Recovery: Data survives regional outages
- Geographic Redundancy: Automatic replication across continents
- Read Performance: Local reads from nearby regions
- Compliance: Data residency requirements met
Tiered Storage Architecture
Combine ZeroFS with ZFS L2ARC for automatic storage tiering:
# Create S3-backed main pool
zpool create datapool /dev/nbd0 /dev/nbd1 /dev/nbd2
# Add local NVMe as L2ARC cache
zpool add datapool cache /dev/nvme0n1
# Add local SSD as SLOG for write performance
zpool add datapool log /dev/ssd0
# Monitor cache effectiveness
zpool iostat -v datapool 5
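Dataset properties decide how the tiers are used. The properties below are standard ZFS tunables; the values are illustrative starting points rather than ZeroFS recommendations:

# Cache both data and metadata in L2ARC
zfs set secondarycache=all datapool

# Route synchronous writes through the SLOG
zfs set logbias=latency datapool

# Match recordsize to the workload (e.g. 16K for databases, 1M for large files)
zfs set recordsize=128K datapool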
Storage Hierarchy
graph TD
A[Application] --> B[ZFS ARC - RAM]
B --> C[ZFS L2ARC - NVMe]
C --> D[ZeroFS Memory Cache]
D --> E[SlateDB Disk Cache - SSD]
E --> F[S3 Storage]
B -.->|Hit Rate: 80%| A
C -.->|Hit Rate: 15%| A
D -.->|Hit Rate: 4%| A
E -.->|Hit Rate: 0.9%| A
F -.->|Hit Rate: 0.1%| A
Database Architectures
PostgreSQL with Streaming Replication
# Primary PostgreSQL on ZeroFS
ZEROFS_NBD_PORTS='10809' \
ZEROFS_NBD_DEVICE_SIZES_GB='500' \
zerofs s3://db-bucket/postgres-primary
# Mount and initialize
nbd-client 127.0.0.1 10809 /dev/nbd0
mkfs.xfs /dev/nbd0
mount /dev/nbd0 /var/lib/postgresql
# Configure PostgreSQL for replication
cat >> /etc/postgresql/16/main/postgresql.conf <<EOF
wal_level = replica
max_wal_senders = 3
wal_keep_size = 1GB
EOF
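A standby can live on its own ZeroFS-backed device and follow the primary over ordinary streaming replication. A rough sketch, assuming a second instance on port 10810, a replication role named replicator, and a primary reachable at primary.example.com (all placeholders):

# Standby storage on a second ZeroFS instance
ZEROFS_NBD_PORTS='10810' \
ZEROFS_NBD_DEVICE_SIZES_GB='500' \
zerofs s3://db-bucket/postgres-standby

# Mount it and hand it to PostgreSQL
nbd-client 127.0.0.1 10810 /dev/nbd1
mkfs.xfs /dev/nbd1
mount /dev/nbd1 /var/lib/postgresql
chown postgres:postgres /var/lib/postgresql

# Clone the primary; -R writes standby.signal and connection settings
sudo -u postgres pg_basebackup -h primary.example.com -U replicator \
  -D /var/lib/postgresql/16/main -R -X stream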
MySQL/MariaDB Galera Cluster
# Node 1
ZEROFS_NBD_PORTS='10809' \
ZEROFS_NBD_DEVICE_SIZES_GB='200' \
zerofs s3://db-bucket/galera-node1
# Node 2
ZEROFS_NBD_PORTS='10810' \
ZEROFS_NBD_DEVICE_SIZES_GB='200' \
zerofs s3://db-bucket/galera-node2
# Node 3
ZEROFS_NBD_PORTS='10811' \
ZEROFS_NBD_DEVICE_SIZES_GB='200' \
zerofs s3://db-bucket/galera-node3
# Each node has its own S3-backed storage
# Galera handles synchronous replication
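Each node additionally needs its NBD device mounted as the MariaDB data directory and a Galera section pointing at the other nodes. A minimal sketch for node 1; node addresses, file locations, and the provider path are placeholders that vary by distribution:

# Mount node 1's S3-backed device as the data directory
nbd-client 127.0.0.1 10809 /dev/nbd0
mkfs.xfs /dev/nbd0
mount /dev/nbd0 /var/lib/mysql
chown mysql:mysql /var/lib/mysql

# Minimal Galera settings
cat > /etc/mysql/mariadb.conf.d/60-galera.cnf <<EOF
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = zerofs-galera
wsrep_cluster_address = gcomm://node1,node2,node3
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
EOF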
CAP Theorem and Distributed Systems
ZeroFS enables interesting CAP theorem trade-offs through its architecture:
Shared Storage Architecture
Client A  ←→  Client B  ←→  Client C
    ↓             ↓             ↓
    ├─────────────┼─────────────┤
    ↓             ↓             ↓
PG Node 1     PG Node 2     PG Node 3
    ↓             ↓             ↓
    └─────────────┼─────────────┘
                  ↓
           Shared ZFS Pool
                  ↓
            Global ZeroFS
Key points:
- Consistency: Only one writer at a time (enforced by ZeroFS)
- Availability: Any accessible node can be promoted
- Partition Tolerance: System continues despite network partitions
How It Works
- Normal Operation: The primary holds exclusive write access
- Partition Detected: The client orchestrates failover
- Fencing: The old primary can no longer write (enforced by ZeroFS)
- New Primary: The client promotes an accessible standby (see the sketch below)
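A hypothetical orchestration script, reduced to the essentials: check the primary and promote the standby when it becomes unreachable. The host name and data directory are placeholders; the fencing itself comes from ZeroFS's single-writer guarantee rather than from this script.

# Hypothetical failover sketch (placeholder host and path)
PRIMARY_HOST=pg-primary.internal
STANDBY_DATA=/var/lib/postgresql/16/main

# If the primary stops answering, promote the local standby.
# The old primary is fenced by ZeroFS, so it cannot keep writing.
if ! pg_isready -h "$PRIMARY_HOST" -t 5 >/dev/null; then
    pg_ctl promote -D "$STANDBY_DATA"
fi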
Container and Kubernetes Integration
Persistent Volumes for Kubernetes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zerofs-nfs
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zerofs-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: zerofs-service.storage.svc.cluster.local
    path: "/"
  storageClassName: zerofs-nfs
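A claim against that volume can then be mounted like any other PVC. A short sketch using placeholder names and sizes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zerofs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: zerofs-nfs
  resources:
    requests:
      storage: 100Gi
EOF

Because the volume is NFS-backed, multiple pods can mount the claim read-write at the same time.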
Docker Compose for Development
version: '3.8'

services:
  zerofs:
    image: ghcr.io/barre/zerofs:latest
    environment:
      ZEROFS_ENCRYPTION_PASSWORD: ${ZEROFS_PASSWORD}
      SLATEDB_CACHE_DIR: /cache
      SLATEDB_CACHE_SIZE_GB: 50
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      ZEROFS_NBD_PORTS: "10809,10810"
      ZEROFS_NBD_DEVICE_SIZES_GB: "10,20"
    volumes:
      - cache:/cache
    ports:
      - "2049:2049"
      - "10809-10810:10809-10810"
    command: s3://dev-bucket/docker-data

  postgres:
    image: postgres:16
    depends_on:
      - zerofs
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # the postgres image refuses to start without it
    volumes:
      - type: volume
        source: postgres-data
        target: /var/lib/postgresql/data

volumes:
  cache:
  postgres-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=zerofs,vers=3,tcp,port=2049
      device: ":/postgres"
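To verify the stack, standard Docker Compose commands are enough (nothing here is ZeroFS-specific):

# Bring the stack up and confirm ZeroFS is serving
docker compose up -d
docker compose logs zerofs

# postgres stores its data on the ZeroFS-backed NFS volume
docker compose exec postgres pg_isready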
Self-Hosting ZeroFS
ZeroFS can even host itself, creating a bootstrapping environment:
# Start initial ZeroFS instance
ZEROFS_ENCRYPTION_PASSWORD='bootstrap' \
zerofs s3://bucket/bootstrap
# Mount it
mkdir -p /mnt/bootstrap
mount -t nfs -o vers=3 127.0.0.1:/ /mnt/bootstrap
# Build ZeroFS on ZeroFS
cd /mnt/bootstrap
git clone https://github.com/Barre/ZeroFS
cd ZeroFS
cargo build --release
# Now you're compiling on S3-backed storage!
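The freshly built binary can then serve a second, nested instance straight from the NFS mount; the bucket prefix and port below are arbitrary:

# Run the self-hosted build against a new bucket prefix
ZEROFS_ENCRYPTION_PASSWORD='bootstrap' \
ZEROFS_NBD_PORTS='10812' \
ZEROFS_NBD_DEVICE_SIZES_GB='10' \
./target/release/zerofs s3://bucket/nested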
Advanced NBD Configuration
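The tuning and RAID commands below assume eight NBD devices are already exported and attached. A minimal sketch of that prerequisite, using an illustrative port range and device sizes:

# Export eight 50 GB devices from one ZeroFS instance
ZEROFS_NBD_PORTS='10809,10810,10811,10812,10813,10814,10815,10816' \
ZEROFS_NBD_DEVICE_SIZES_GB='50,50,50,50,50,50,50,50' \
zerofs s3://bucket/nbd-array

# Attach them as /dev/nbd0 ... /dev/nbd7
for i in {0..7}; do
  nbd-client 127.0.0.1 $((10809 + i)) /dev/nbd$i
done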
# High-performance NBD setup
for i in {0..7}; do
echo 2048 > /sys/block/nbd$i/queue/read_ahead_kb
echo 256 > /sys/block/nbd$i/queue/nr_requests
echo mq-deadline > /sys/block/nbd$i/queue/scheduler
done
# RAID0 for performance (RAID1/5/6 for redundancy)
mdadm --create /dev/md0 --level=0 --chunk=128 --raid-devices=8 \
/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 \
/dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7
# Format with optimal settings
mkfs.xfs -f -d su=128k,sw=8 /dev/md0
Future Possibilities
ZeroFS opens doors to new architectures:
- Serverless Databases: Spin up database instances on-demand
- Edge Computing: Run workloads close to data with local caches
- Hybrid Cloud: Seamlessly move data between clouds
- Time-Travel Storage: Snapshot and restore using SlateDB checkpoints
- Multi-Protocol Access: Same data via NFS, NBD, and future protocols
These advanced use cases demonstrate ZeroFS's flexibility. Start simple and gradually adopt more complex architectures as your needs grow.