Architecture of the project
We are going to run Ente photo server (just the photos API) on a NixOS host.
- Since security is critical, and even though Ente has proven that they take security very seriously, we will isolate everything in a NixOS microvm
- Ente originally used MinIO in its Docker setup, but since it has been discontinued, we have to move to an alternative. I chose Garage, which is also mentioned in some Ente documentation.
- The setup below groups Ente and Garage into the same microvm. It is your choice whether to decouple them; it should be trivial to do so thanks to NixOS.
Warning: Ente takes quite a lot of resources to build, so consider increasing your swap if the build fails. When building the microvm alone (imperatively), I had to increase my swap to 16 GB (my machine only has 2 GB of physical RAM…)
Setting up the microvm - General considerations
- I recommend creating a basic user to get a shell inside the VM
- Enable SSH for now, to access the VM’s shell
- You can disable the VM’s firewall since all traffic will come from the host already
- For networking, I chose to forward the VM’s ports directly on the host, for simplicity. We only need the following ports:
- 3900 for the S3 API
- (optional) 3903 for the Garage admin API; I disabled it
- 8080 for Ente’s API
- the VM’s SSH port (22 -> 2222, for example)
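With microvm.nix, this forwarding can be declared on the VM's definition roughly as follows. This is a sketch assuming the QEMU user-mode networking backend; the `microvm.forwardPorts` option mirrors QEMU's port-forwarding syntax, so double-check it against the microvm.nix documentation for your version:

```nix
{
  microvm.forwardPorts = [
    { from = "host"; host.port = 2222; guest.port = 22; }   # SSH into the VM
    { from = "host"; host.port = 3900; guest.port = 3900; } # Garage S3 API
    { from = "host"; host.port = 8080; guest.port = 8080; } # Ente API
  ];
}
```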
Start the MicroVM, and SSH as the test user:
ssh test@127.0.0.1 -p 2222
Garage
Declaring Garage in the microvm
By default, Garage runs via systemd with DynamicUser set. This caused me a bit of a problem with access rights, because I could not find which user Garage runs as in order to give it the correct permissions. Let’s instead run it as a system user called garage, in a group garage, and set the keys’ permissions to this user (see the section on Secrets management below).
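This user switch can be sketched like so in the VM's configuration, assuming the nixpkgs `services.garage` module (`lib.mkForce` is needed to override the module's DynamicUser default):

```nix
{ lib, pkgs, ... }:
{
  services.garage = {
    enable = true;
    package = pkgs.garage;  # pin the version matching your cluster if needed
  };

  # Replace DynamicUser with a fixed system user we can grant file permissions to
  systemd.services.garage.serviceConfig = {
    DynamicUser = lib.mkForce false;
    User = "garage";
    Group = "garage";
  };
  users.users.garage = {
    isSystemUser = true;
    group = "garage";
  };
  users.groups.garage = { };
}
```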
Another issue I faced was storing Garage’s data permanently. We need a bind mount, which will be at /var/lib/garage, and we need to set the appropriate permissions so that the folder gets created by systemd:
On the VM’s definition:
{
  # Other shares here
  microvm.shares = [{
    proto = "virtiofs";
    tag = "garage-data";
    source = "/ente-data/garage";
    mountPoint = "/var/lib/garage";
  }];
  systemd.tmpfiles.rules = [
    # Other rules here
    "d /var/lib/garage 0750 garage garage -"
    "d /var/lib/garage/meta 0750 garage garage -"
    "d /var/lib/garage/data 0750 garage garage -"
  ];
}
The full definition of the MicroVM can be found on my GitHub, but it has been stripped of all debug and installation-related options.
Starting Garage
First, check that Garage is running properly:
[test@microvm-ente:~]$ sudo garage status
==== HEALTHY NODES ====
ID Hostname Address Tags Zone Capacity DataAvail Version
1d9d948d296d03ec microvm-ente 127.0.0.1:3901 NO ROLE ASSIGNED cargo:2.2.0
If not, several troubleshooting steps:
- check that we are running `garage` as a user called `garage:garage`
- check that we are mounting the permanent share at `/var/lib/garage`
- if the mount has never held `garage`’s data before, we may need to initialize it by running `garage server`
- ensure proper ownership with `sudo chown -R garage:garage /var/lib/garage/`
Creating a node
In our setup, we will only have one node, which means no data replication. A good improvement later on is to add more nodes!
Follow the steps in the documentation to create and apply a layout for your node according to your preferences.
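For a single node, the layout commands boil down to something like this sketch (the node ID prefix comes from `garage status`, `32G` matches my disk, and the zone name is arbitrary; check the Garage documentation for the exact flags of your version):

```shell
# Assign a role to our only node: zone + capacity (node ID prefix from `garage status`)
sudo garage layout assign -z nixos-asustor -c 32G 1d9d948d
# Review the staged changes, then apply them as layout version 1
sudo garage layout show
sudo garage layout apply --version 1
```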
On my setup, I end up with this 32G node:
[test@microvm-ente:~]$ sudo garage layout show
==== CURRENT CLUSTER LAYOUT ====
ID Tags Zone Capacity Usable capacity
79237492367e2c6e [] nixos-asustor 32.0 GB 32.0 GB (100.0%)
Zone redundancy: maximum
Current cluster layout version: 1
Creating a bucket
Next, create a bucket as per the documentation. I will call mine ente-bucket.
sudo garage bucket create ente-bucket
Applying CORS config
According to the documentation, we need to “setup some CORS rules to allow the Ente frontend to access the bucket”.
The example uses the AWS CLI, but lacks detailed instructions.
First, create an access key (to be used with the AWS CLI) and give it the appropriate permissions on the bucket:
Warning: The secret key should stay confidential! Never share the secret of a key used in production!
[test@microvm-ente:~]$ sudo garage key create cors-key
==== ACCESS KEY INFORMATION ====
Key ID: GK81096bee98bb440a48c3b040
Key name: cors-key
Secret key: (redacted)
Created: 2026-03-20 19:02:04.470 +00:00
Validity: valid
Expiration: never
Can create buckets: false
==== BUCKETS FOR THIS KEY ====
Permissions ID Global aliases Local aliases
Now permissions:
[test@microvm-ente:~]$ sudo garage bucket allow --key cors-key --owner --read --write ente-bucket
==== BUCKET INFORMATION ====
Bucket: dcd4d66b5e2ec3e99f813a66c969f393f7c162c0f9c14d1039e5769319b4dbfe
Created: 2026-03-18 21:59:04.878 +00:00
Size: 1.1 GiB (1.2 GB)
Objects: 1205
Website access: false
Global alias: ente-bucket
==== KEYS FOR THIS BUCKET ====
Permissions Access key Local aliases
RWO GK81096bee98bb440a48c3b040 cors-key
Now configure AWS CLI to use these credentials:
[test@microvm-ente:~]$ aws configure --profile garage
AWS Access Key ID [None]: GK81096bee98bb440a48c3b040
AWS Secret Access Key [None]: (redacted)
Default region name [None]: garage
Default output format [None]:
You can check that the profile is saved by re-running the command, and pressing Enter at each prompt:
[test@microvm-ente:~]$ aws configure --profile garage
AWS Access Key ID [****************b040]:
AWS Secret Access Key [****************0105]:
Default region name [garage]:
Default output format [None]:
Now we can run the tutorial’s commands:
[test@microvm-ente:~]$ export CORS='{"CORSRules":[{"AllowedHeaders":["*"],"AllowedMethods":["GET", "PUT", "POST", "DELETE"],"AllowedOrigins":["*"], "ExposeHeaders":["ETag"]}]}'
[test@microvm-ente:~]$ aws s3api put-bucket-cors \
--bucket ente-bucket \
--cors-configuration "$CORS" \
--endpoint-url http://127.0.0.1:3900 \
--profile garage
Final checks
List all buckets via AWS CLI:
[test@microvm-ente:~]$ aws s3 ls \
--endpoint-url http://127.0.0.1:3900 \
--profile garage
2026-03-18 21:59:04 ente-bucket
List the bucket’s content:
[test@microvm-ente:~]$ aws s3 ls s3://ente-bucket \
--endpoint-url http://127.0.0.1:3900 \
--profile garage
PRE .minio.sys/
PRE 1580559962386438/
PRE 1580559962386439/
View CORS rules:
[test@microvm-ente:~]$ aws s3api get-bucket-cors \
--bucket ente-bucket \
--endpoint-url http://127.0.0.1:3900 \
--profile garage
{
"CORSRules": [
{
"AllowedHeaders": [
"*"
],
[...]
We can now revoke this access key:
[test@microvm-ente:~]$ sudo garage key delete cors-key --yes
Access key GK81096bee98bb440a48c3b040 has been deleted.
(Optional) Restoring previous instance’s data
If you previously exported data from another Ente instance and wish to restore it, you can do it now, starting with the data itself (the files in the S3 bucket). Restoring the database will be covered a bit later.
If you still have the access key you created for the AWS CLI, you can reuse it; otherwise create a new one. Make sure you can still access the S3 bucket via the AWS CLI as demonstrated above.
Restoring is simple:
[test@microvm-ente:~]$ aws s3 cp /minio-bck/b2-eu-cen/ s3://ente-bucket/ \
--recursive \
--endpoint-url http://127.0.0.1:3900 \
--profile garage
The original bucket name needs to be adapted: here, my backup comes from a Docker instance of Ente with MinIO, which created a bucket called b2-eu-cen.
Creating an access key for Ente
Now we need to create a permanent access key for Ente to access the bucket.
[test@microvm-ente:~]$ sudo garage key create ente-key
==== ACCESS KEY INFORMATION ====
Key ID: GKaf9eb364792283cdc1b68177
Key name: ente-key
Secret key: (redacted)
Created: 2026-03-19 14:04:14.241 +00:00
Validity: valid
Expiration: never
Can create buckets: false
==== BUCKETS FOR THIS KEY ====
Permissions ID Global aliases Local aliases
And give it the appropriate permissions:
[test@microvm-ente:~]$ sudo garage bucket allow --read --write --owner ente-bucket --key ente-key
==== BUCKET INFORMATION ====
Bucket: dcd4d66b5e2ec3e99f813a66c969f393f7c162c0f9c14d1039e5769319b4dbfe
Created: 2026-03-18 21:59:04.878 +00:00
Size: 1.1 GiB (1.2 GB)
Objects: 1205
Website access: false
Global alias: ente-bucket
==== KEYS FOR THIS BUCKET ====
Permissions Access key Local aliases
RWO GKaf9eb364792283cdc1b68177 ente-key
Plug this key into Ente’s NixOS config, using a secrets management tool of your preference.
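As an illustration, here is roughly what that could look like. This is a sketch based on my reading of the nixpkgs `services.ente.api` module at the time of writing; the option names, the `_secret` file-reference convention, and the secret paths are assumptions to verify against your NixOS channel and your own secrets tool:

```nix
{
  services.ente.api = {
    enable = true;
    settings = {
      s3 = {
        are_local_buckets = true;     # plain HTTP, path-style access to Garage
        "b2-eu-cen" = {               # museum's first bucket slot; the name is fixed by Ente
          endpoint = "http://127.0.0.1:3900";
          region = "garage";          # must match Garage's s3_region
          bucket = "ente-bucket";
          key._secret = "/run/secrets/ente-garage-key";
          secret._secret = "/run/secrets/ente-garage-secret";
        };
      };
    };
  };
}
```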
PostgreSQL
Next, check that PostgreSQL is running properly in the VM with:
[test@microvm-ente:~]$ sudo systemctl status postgresql
Once all the setup is done (or you have restored your backup of the previous instance), you can log in as user pguser and browse the database ente_db:
[test@microvm-ente:~]$ psql -h 127.0.0.1 -U pguser -d ente_db
Password for user pguser:
psql (15.17)
Type "help" for help.
ente_db=>
(Optional) Restoring previous instance’s database
If you have done a backup, you can easily restore it:
psql -h 127.0.0.1 -U pguser ente_db < ente_db_backup.sql
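For completeness, the backup file itself can be produced on the old instance with pg_dump; the hostname, user and database name below are the ones from my setup, adapt them to yours:

```shell
# On the previous instance: dump the whole ente_db database to a plain SQL file
pg_dump -h 127.0.0.1 -U pguser ente_db > ente_db_backup.sql
```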
If you restored a backup, the users table most probably already contains users, so you can view them:
ente_db=> SELECT user_id FROM users;
user_id
------------------
1580559962386438
1580559962386439
(2 rows)
Here, I have 2 users ready. Note that they correspond to the folders restored in the S3 backup earlier, which is a good sign.
Reverse proxy
The host needs to serve a reverse proxy to the Ente API and to Garage’s S3 endpoint.
We are going to use Caddy for simplicity and automatic certificate management.
Here is an example that you can include in your host’s configuration.nix:
{
  networking.firewall.allowedTCPPorts = [ 80 443 ];
  services.caddy = {
    enable = true;
    virtualHosts = {
      "s3.example.tld" = {
        extraConfig = ''
          reverse_proxy http://127.0.0.1:3900 {
            health_uri /health
            health_port 3903
          }
        '';
      };
      "api.example.tld" = {
        extraConfig = ''
          reverse_proxy 127.0.0.1:8080
        '';
      };
    };
  };
  security.acme = {
    acceptTerms = true;
    defaults.email = "youremail@here.com";
  };
}
Secrets management
There are quite a few secrets that need to be managed in this project. I am using agenix for this purpose.
- The `postgres` credentials (so that Ente can access the DB)
- The Garage key + secret (so that Ente can access the S3 bucket)
- Ente itself also requires:
  - API Key encryption
  - API Key Hash
  - API JWT Secret
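With agenix, each of these is declared once and decrypted to a path with the right ownership. A sketch for one of them, where the `.age` file name and path follow my repo layout (adapt them to yours):

```nix
{
  age.secrets.ente-garage-key = {
    file = ./secrets/ente-garage-key.age;   # encrypted with the host's public key
    path = "/run/secrets/ente-garage-key";  # where the tmpfiles rules below expect it
    owner = "root";
    group = "ente";
    mode = "0640";
  };
}
```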
In the end, I have the following systemd.tmpfiles.rules for all these secrets inside the microvm:
systemd.tmpfiles.rules = [
  "f /run/secrets/postgres-pguser-password 0640 root ente"
  "f /run/secrets/ente-garage-key 0640 root ente"
  "f /run/secrets/ente-garage-secret 0640 root ente"
  "f /run/secrets/api-key-encryption 0640 root ente"
  "f /run/secrets/api-key-hash 0640 root ente"
  "f /run/secrets/api-jwt-secret 0640 root ente"
  "f /run/secrets/garage-rpc-secret 0600 garage garage"
  "f /run/secrets/garage-admin-token 0600 garage garage"
  "f /run/secrets/garage-metrics-token 0600 garage garage"
  "d /var/lib/garage 0750 garage garage -"
  "d /var/lib/garage/meta 0750 garage garage -"
  "d /var/lib/garage/data 0750 garage garage -"
];
The last three entries have been discussed in the Garage setup and are not related to secrets management.
Debugging
Check Ente is working
With this, you can test:
- if ente accesses the postgres database correctly
- that your port forwarding from the VM to the host is working
- that your reverse proxy to Ente’s API is working fine
Install ente-cli inside the VM, and try adding your account:
[test@microvm-ente:~]$ mkdir ~/.ente export
[test@microvm-ente:~]$ cat > ~/.ente/config.yaml << EOF
> endpoint:
>   api: http://localhost:8080
> EOF
[test@microvm-ente:~]$ ENTE_CLI_SECRETS_PATH=./secrets.txt ente account add
Enter app type (default: photos):
Use default app type: photos
Enter export directory: export
Enter email address: youremail@here.com
Enter password:
Please wait authenticating...
Account added successfully
run `ente export` to initiate export of your account data
If you see Enter password, then it’s working fine.
If you see Enter OTP, then the CLI is trying to connect to api.ente.io, and not your local instance.
To connect to a remote instance behind your reverse proxy, use the HTTPS endpoint of your API.
If you try exporting now, you need to make sure Ente can access the s3 bucket, which is a different problem to debug.
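Before digging into the CLI, a quicker smoke test is to hit museum's health endpoint directly. To my understanding this is the `/ping` route that Ente's own Docker healthcheck uses; verify the path against the Ente documentation:

```shell
# From inside the VM: should return a small JSON payload if museum is up
curl -s http://localhost:8080/ping
# Through the reverse proxy, from anywhere:
curl -s https://api.example.tld/ping
```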
Final checks - Security
You want to check that:
- all keys that you created in Garage to access your bucket are revoked (except the one that Ente uses in production)
- all your keys are stored in secrets, and permissions are as restrictive as possible
- only ports 8080 and 3900 are forwarded to your host
- Caddy has enabled TLS correctly
- the test user and OpenSSH are disabled
Sources
- Deploying Ente without Docker: ente documentation
- microvm: official documentation
- Self-hosted s3 configuration: ente documentation
- Garage, getting started: Garage documentation
- Ente with Garage: Garage documentation
- Ente CLI: ente documentation
- Ente CLI keyring: ente documentation