What is Garage
Garage is an open-source project built by a non-profit association called DeuxFleurs. Its ambition is to provide secure storage at low cost, without needing thousands of disks to build a ZFS pool or a heavy NAS that eats 600 W.
Garage is designed to be an easy solution to administrate. Many people will say ZFS is not that hard to master, but it requires you to juggle a lot of parameters, and a single misconfiguration can lead to serious issues or even data loss.
Garage is an efficient piece of software, built to run on old hardware, over unstable networks, and with unreliable disks.
Why I need Garage
Today my virtual machines, my VPS, my databases, my ZFS datasets, and all my other valuable persistent data are backed up in only one place: my NAS. If anything happens to that NAS, I lose everything.
In my case Garage is a second backup layer, geo-replicated across multiple locations, for all my existing backups.
My goal for my Garage Cluster
My Garage cluster needs to meet the following requirements:
- Geo-replication: data is distributed across multiple physical sites
- Minimal monthly cost
- Resilience to power outages
- Fault tolerance: if a device fails, the cluster must keep running in degraded mode while I arrange a replacement
The environment
The actual cluster is made up of 3 ZimaBlade devices, each with 2 old cores and 8 GB of RAM, each hosted in a different location, and each equipped with a 3 TB hard drive. The drives will be upgraded over time.

Configuration
Device management is based on NixOS, which simplifies the installation process thanks to nixos-anywhere. For orchestration, I use Colmena, which lets me administer the entire cluster with a single tool.
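Day-to-day changes then come down to standard Colmena commands (the node name below is a placeholder for one of my hosts):

```shell
# Build and deploy the configuration to every node in the hive
colmena apply

# Or deploy to a single node only
colmena apply --on some-node
```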
Device Installation
Installation is done via nixos-anywhere, a NixOS script that automatically formats the drive and installs the operating system.
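A typical nixos-anywhere invocation looks like this (the flake attribute and target IP are placeholders):

```shell
# Formats the target's disk and installs the given NixOS configuration over SSH
nix run github:nix-community/nixos-anywhere -- --flake .#my-node root@192.0.2.10
```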
Each machine requires a NixosConfiguration: a declarative file that defines everything
that differs from the base NixOS configuration — the Garage config, networking, fstab, etc.
Here is a quick preview of what a NixOS configuration looks like:
{
  config,
  common,
  lib,
  ...
}:
{
  system.stateVersion = "25.05";

  # Change to whatever user name you like
  users.users.nixos = {
    extraGroups = [ "wheel" ];
    isNormalUser = true;
    createHome = true;
    description = "A NixOS user";
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA0dL9hp+DRU33S3Nz1aYakFqUHNrIbjIhyS9Xj0wbpZ ridergogo00@gmail.com"
    ];
  };

  nix.settings.trusted-users = [
    "root"
    "@wheel"
  ];

  virtualisation.virtualbox.guest.enable = lib.mkForce false;
  networking.interfaces.enp2s0.useDHCP = true;

  imports = [
    ./boot.nix
    ./netbird.nix
  ];
}
Every node needs to be reachable by all other nodes in the cluster. For this I use Netbird, a mesh VPN that creates a private network between all nodes:
services.netbird.enable = true;
The private key and URL setup for Netbird is done imperatively, as NixOS does not yet have a native option for this.
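In practice, that imperative step is a single command per node; a sketch, where the management URL and setup key are placeholders for my Netbird instance:

```shell
# Enroll this node into the Netbird mesh (run once, as root)
netbird up --management-url https://netbird.example.org --setup-key <SETUP_KEY>
```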
Creation of the Garage configuration
The Garage configuration is fully managed by Nix.
{
  config,
  pkgs,
  ...
}:
{
  services.garage = {
    package = pkgs.garage_2;
    enable = true;
    settings = {
      metadata_dir = "/var/lib/garage/meta";
      data_dir = [
        {
          path = "/mnt/garage-disk1";
          capacity = "2.7T";
        }
      ];
      db_engine = "lmdb";
      metadata_auto_snapshot_interval = "6h";
      replication_factor = 3;
      compression_level = 2;
      s3_api = {
        s3_region = "garage";
        api_bind_addr = "0.0.0.0:3900";
        root_domain = ".s3.garage";
      };
      s3_web = {
        bind_addr = "0.0.0.0:3902";
        root_domain = ".web.garage";
        index = "index.html";
      };
      rpc_bind_addr = "0.0.0.0:3901";
      bootstrap_peers = [
        "be1c8e7@100.97.XX.XX:3901"
        "e19b442f@100.97.XX.XX:3901"
        "8ddb4dff@100.97.XX.XX:3901"
      ];
    };
  };

  networking.firewall.allowedTCPPorts = [
    3900
    3901
    3902
  ];

  systemd.services.garage.serviceConfig = {
    DynamicUser = false;
    User = "garage";
    Group = "garage";
    ReadWritePaths = [ "/mnt/garage-disk1" ];
  };

  users.users.garage = {
    isSystemUser = true;
    group = "garage";
  };
  users.groups.garage = { };
}
Creation of the Garage cluster
First, check that every node has been discovered with the command
garage status
Once everything is detected, we can create the layout of the cluster.
What is the layout?
The layout allows Garage to understand where each node is geographically located. For example, with 8 nodes spread across three cities:
| Device Name | Region | Capacity |
|---|---|---|
| Paris-1 | Paris1 | 2T |
| Paris-2 | Paris1 | 2T |
| Paris-3 | Paris1 | 2T |
| NY-1 | NY1 | 3T |
| NY-2 | NY1 | 2T |
| NY-3 | NY1 | 1T |
| London-1 | London | 3T |
| London-2 | London | 3T |
Garage is built to run on heterogeneous hardware; there is no need for identical nodes. You just need at least 3 nodes to run with a replication factor of 2.
Assign the layout
In my case, I have 3 devices in 3 different locations, so I create 3 zones:
garage layout assign xxx -z m-ver-fr -c 2.9T
I run the same command for the two other nodes with their respective zone names.
Then I verify the layout:
garage layout show
And when everything is OK, I apply the layout with this command:
garage layout apply --version 1
A final garage status confirms the layout has been correctly applied.

Monitor Garage
Monitoring Garage is simple: Prometheus metrics are already exposed, so we just need to create a job in Prometheus to scrape them.
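As a sketch, assuming the Garage admin API is enabled on its default port 3903 (the admin.api_bind_addr setting in garage.toml), a minimal scrape job could look like this, with my Netbird addresses as placeholder targets:

```yaml
scrape_configs:
  - job_name: "garage"
    metrics_path: /metrics
    static_configs:
      - targets:
          - "100.97.XX.XX:3903"  # one entry per Garage node
```

If an admin metrics token is configured in Garage, Prometheus also needs the matching bearer token in this job.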
And there is a Grafana dashboard for Garage.

Usages
The cluster is up and running. Here are the different ways I use it.
Creating an S3 Bucket
To create an S3 bucket:
garage bucket create test-bucket
You can get info on your bucket with this command:
garage bucket info test-bucket
After that, we need an access key for this bucket (or any other bucket).
Creating an access key:
garage key create my-key
Get information about this key:
garage key info my-key
Once we have a bucket and a key, we need to set the key's permissions on the bucket.
In Garage there are three levels of permission:
- Write
- Read
- Owner
For the example I will give all three levels of permission to this key:
garage bucket allow --read --write --owner test-bucket --key my-key
Now our bucket exists with the right permissions associated with our key.
To verify everything we just did, run garage bucket info test-bucket again.
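From a client machine, any standard S3 tool can then use the key; as a hedged sketch with the AWS CLI (the endpoint here is the public S3 URL my gateway serves later in this post, and the key values are placeholders):

```shell
export AWS_ACCESS_KEY_ID=GK...       # key ID shown by `garage key info my-key`
export AWS_SECRET_ACCESS_KEY=...     # secret displayed once at key creation
export AWS_DEFAULT_REGION=garage     # must match s3_region in garage.toml

# Upload a test object to the bucket
aws --endpoint-url https://s3.garage.ridercorp.org s3 cp ./hello.txt s3://test-bucket/
```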

TrueNAS Backup
Context
My TrueNAS NAS is my main server. It hosts my home directories, VM backups, photos, administrative files, and other sensitive data I can’t afford to lose. Until now, TrueNAS itself was not backed up anywhere else.
Garage becomes the second storage location for my TrueNAS data. In the future, I also plan to deploy a second TrueNAS as a mirror across two separate ZFS pools; combined with Garage's geo-replication, this will make for a solid backup architecture.
Setup
TrueNAS has a built-in feature to replicate a dataset to an S3 target. I created a dedicated bucket and access key in Garage, then configured the credentials in TrueNAS.

After that, I need to create a data replication task. In this task I select the folder I want to back up and the backup method, and pick the Garage credentials we created previously.

I chose the SYNC transfer mode to avoid duplicates: Garage is meant to be a mirror of TrueNAS, not a versioned archive.
Backup XCP NG
On XCP-NG, Xen Orchestra handles backups. It supports multiple backends including NFS and S3. In my setup, I configure one backup to my NAS and another to Garage, giving me two independent destinations.
Static Website Hosting
In a previous post I explained that this blog was hosted on IPFS. I decided to move it to Garage because it is easier for me to update.
The Garage Gateway
A Garage gateway is a special node that does not store data, but exposes either a web server or the S3 API. It acts a bit like a CDN: it queries the Garage network to serve content.
My VPS (used as a reverse proxy) runs mainly on Docker, so I deployed a Garage gateway with this docker-compose:
services:
  garage-gateway:
    image: "dxflrs/garage:v2.2.0"
    restart: unless-stopped
    volumes:
      - ./garage/garage.toml:/etc/garage.toml
      - garage:/var/lib/garage/meta
      - garage:/var/lib/garage/data
    network_mode: host

volumes:
  garage:
The config file is the same as the one used in my NixOS configuration above.
After that I need to extend the layout, specifying that this node is a gateway:
garage layout assign --gateway --tag ovh-gateway -z ovh-eu XXX
If the layout is correctly applied, garage status should look like this:

Publishing the website
Now I need to create a bucket for the website, and add an alias matching the DNS name of my portfolio so requests are redirected correctly.
garage bucket create portfolio
garage bucket alias portfolio matthieudaniel-thomas.fr
And now we can see that the bucket has multiple aliases.

In Garage, a bucket must be explicitly allowed to serve web content; this command enables it:
garage bucket website --allow portfolio
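The site content itself can then be pushed with any S3 client; for instance, assuming the static site build lives in ./public and using the AWS CLI with the same credentials as before:

```shell
# Mirror the local build into the bucket, deleting stale objects
aws --endpoint-url https://s3.garage.ridercorp.org s3 sync ./public s3://portfolio --delete
```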
Reverse Proxy configuration (Caddy):
*.web.garage.ridercorp.org:443, matthieudaniel-thomas.fr:443 {
reverse_proxy http://37.XX.XX.XX:3902
tls {
dns cloudflare {env.CF_API_TOKEN}
}
}
On the DNS side, I just point the domain to the gateway. To reach a bucket, the domain follows the pattern <bucket-name>.web.garage.ridercorp.org.
And if everything went well, you are now reading this page from Garage ;)
Backup VPS
My VPS instances mostly run Docker containers, so I want to back up their volumes. For this I use Restic, which natively supports S3 as a backend.
Setup
First, I need to create a bucket and a key in Garage for my VPS.
garage bucket create vps-matthieu-backup
garage key create opal
garage bucket allow --key opal --read --write vps-matthieu-backup
Installing Restic:
dnf install restic
Dry run
First I want to test everything in my user session, creating the repository by hand; afterwards I will move it all into a systemd timer.
Initialize the repository:
restic -r s3:https://s3.garage.ridercorp.org/vps-matthieu-backup init
The repository has been created.

To launch a backup, this command can be used:
restic -r s3:https://s3.garage.ridercorp.org/vps-matthieu-backup backup /opt/synapse /srv/gitlab
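Listing the snapshots afterwards confirms the backup went through:

```shell
# Shows one line per snapshot, with its ID, date, and backed-up paths
restic -r s3:https://s3.garage.ridercorp.org/vps-matthieu-backup snapshots
```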
Systemd timer
To automate my backup I will use a systemd timer that runs every week.
For the timer I first need a script, which simply contains the backup command used above.
We put this script at /usr/local/bin/restic-backup.sh.
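A minimal version of that script, written to the working directory first so it can be reviewed before moving it to /usr/local/bin (the retention policy at the end is my own addition; adjust or drop it):

```shell
cat > restic-backup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

REPO="s3:https://s3.garage.ridercorp.org/vps-matthieu-backup"

# Back up the Docker volume directories
restic -r "$REPO" backup /opt/synapse /srv/gitlab

# Assumption: keep the last 8 weekly snapshots and prune the rest
restic -r "$REPO" forget --keep-weekly 8 --prune
EOF
chmod +x restic-backup.sh
```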
After that we need a timer that will trigger our systemd service.
restic-backup.timer
[Unit]
Description=Weekly exec of backup
[Timer]
OnCalendar=weekly
Persistent=true
[Install]
WantedBy=timers.target
The service:
restic-backup.service
[Unit]
Description=Restic backup
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/restic-backup.sh
Environment="AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxxxxxxxx"
Environment="AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Environment="RESTIC_PASSWORD=homing-tae-typhoid-slurps-reaches"
User=root
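With both unit files in place under /etc/systemd/system, enabling the schedule is the standard systemd routine:

```shell
systemctl daemon-reload
systemctl enable --now restic-backup.timer
systemctl list-timers restic-backup.timer   # shows the next scheduled run
```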
And if we run the service manually, we can see the backup:

Conclusion
Garage is a remarkably simple solution to administrate, yet surprisingly powerful. With just a few commands and a handful of NixOS configuration files, I now have a geo-replicated, fault-tolerant storage infrastructure running on modest, heterogeneous hardware.
But beyond the technical aspects, what matters most to me about this setup is data sovereignty.
In a landscape dominated by AWS S3, Google Cloud Storage, and Azure Blob Storage, it’s tempting to hand off storage to a third party, often foreign, often opaque about their practices, and always free to change their pricing or terms of service at any time. With Garage, I know exactly where my data lives: on three physical machines, in three locations I control, with no dependency on any cloud provider.
This is also a form of personal digital resilience. No credit card required, no surprise bills, no account suspension. My infrastructure runs on hardware I own, with free and open-source software that is fully auditable, maintained by a non-profit association. If DeuxFleurs disappeared tomorrow, my cluster would keep running indefinitely.
Finally, self-hosting with tools like Garage, NixOS, and Colmena opens an interesting path to reclaiming control over your digital footprint, even without significant resources. The entire infrastructure costs me little more than the electricity to run three small machines. It's a reminder that the cloud is not an inevitability; it's a choice among many, and sometimes not the best one.