

Those sites have a lot of backend processes that need a server, so that wouldn’t work, but thank you regardless.
If you’re doing static sites, then traffic shouldn’t be a concern.
I host two sites that each get more than 2 million hits a month, and I run them from a $0.10 Scaleway server.
Cloudflare in front of the sites takes most of the load.
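Roughly what that looks like, as a sketch (the image, port, and paths below are placeholders, not my actual setup): a tiny containerized web server hands out the pre-built files, and the Cloudflare proxy is enabled on the DNS record so it caches and answers most requests before they ever reach the VPS.

    # Hypothetical compose sketch for a static site on a small VPS;
    # Cloudflare sits in front via the proxied DNS record and caches the assets.
    services:
      static-site:
        image: docker.io/library/nginx:alpine
        container_name: static-site
        ports:
          - "8080:80"
        restart: unless-stopped
        volumes:
          # pre-built static files, mounted read-only
          - ./public:/usr/share/nginx/html:ro,Z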
As someone posted above, someone obtaining your encrypted data now could still lead to problems in the future:
smartphone app
It’s a PWA; just install the site on your phone and it behaves like a native app. I’ve been using it that way for about a year.
It’s just Fedora CoreOS with some small quality-of-life packages added to the build.
There’s tons of documentation for CoreOS and it’s been around for more than a decade.
If you’re running a container workload, it can’t be beat in my opinion. All the security and configuration issues are handled for you, which is especially ideal for a home user who is generally not a security expert.
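As a rough sketch of how that “configuration handled for you” model works (values below are placeholders): a CoreOS-style host is described once in a small Butane file, converted to Ignition with the butane tool, and applied automatically on first boot; after that you mostly just run containers.

    # Minimal Butane sketch (placeholder key); butane converts this to Ignition,
    # which Fedora CoreOS applies on first boot.
    variant: fcos
    version: 1.5.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...your-public-key...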
It’s just Fedora CoreOS with some QoL packages added at build time. Not niche at all. The very minor changes made are all transparent on GitHub.
Choose CoreOS if you prefer; it’s equally zero maintenance.
🤷 I’ve been running Aurora and uCore for over a year and have yet to do any maintenance.
You can roll back to the previous working build by simply restarting. It’s pretty much the easiest fix ever and still zero maintenance, since you don’t have to reconfigure or troubleshoot anything; you just restart.
Updates won’t apply unexpectedly, so you can reboot at a time that suits you. Unless there’s a specific security fix you need, there’s no need to apply them frequently. Total downtime is the length of a restart, which is also nice and easy.
It won’t fit every use-case, but if you’re looking for a zero-maintenance containerized-workload option, it can’t be beat.
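If you’d rather do it explicitly than pick the older entry in the boot menu, the rollback flow on rpm-ostree based images like uCore/Aurora looks roughly like this:

    rpm-ostree status     # shows the booted deployment plus the previous one kept on disk
    rpm-ostree rollback   # makes the previous deployment the default for the next boot
    systemctl reboot      # come back up on the last known-good image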
It’s the kind of thing I host so that no matter what device I’m sitting in front of, I can easily pull it up. Hence a server is needed. I’m not talking about just my own laptop or phone, I mean any shared or borrowed device.
I find it so useful I pull it up almost every workday.
What do you mean by that? Podman compose is a drop-in replacement for Docker compose, and everything is identical other than needing to add :Z to the end of your volume lines.
Here’s my Navidrome config. It’s running on the uCore build of CoreOS, with rootless Podman and SELinux. Podman is running with its out-of-the-box configuration, and this is the full compose file.
i have to remap the user namespace
Note: I have not done this. What are you running Podman on? Perhaps there is some config issue with the host, since you’re having issues with many containers?
To be fair, maybe just go with Docker if it’s causing that much pain. But again, mine works out of the box on uCore without any changes to the Podman setup, using the config below.
services:
  navidrome:
    image: deluan/navidrome:latest
    container_name: navidrome
    ports:
      - "3015:4533"
    restart: unless-stopped
    environment:
      # Optional: put your config option overrides here. Examples:
    volumes:
      - ./data:/data:Z
      - ./config.toml:/navidrome.toml:Z
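Bringing it up is just the normal compose workflow (assuming the compose file and those ./data / config.toml paths sit in your current directory):

    podman-compose up -d       # or "podman compose up -d" with the newer built-in wrapper
    podman logs -f navidrome   # follow the container logs to confirm it started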
You need to add :Z to the end of your volume lines, or a lowercase :z for shared volumes.
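Rough illustration of the difference (names and paths made up): uppercase :Z relabels the content privately for one container, while lowercase :z gives it a shared SELinux label so several containers can mount the same directory.

    services:
      app-a:
        image: docker.io/library/alpine:latest
        volumes:
          - ./only-app-a:/data:Z      # private label: relabeled for this container only
          - ./shared-media:/media:z   # shared label: app-b can mount the same directory
      app-b:
        image: docker.io/library/alpine:latest
        volumes:
          - ./shared-media:/media:z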
I’m running 50+ containers, probably most of the popular ones, and they all work fine.
I run 50+ containers with rootless Podman compose (on CoreOS) and haven’t encountered any unsolvable issues so far.
I’ve never tried quadlets, but I haven’t found any compelling reason to.
I get 10+ hours on an Aurora-DX + AMD laptop. I think AMD might be the part that makes the big difference.
No special config, just out of the box.
Move your stuff from Gandi to Netim.com
Gandi got acquired and has done some very weird shit with pricing.
So what does immutable mean?
The easiest explanation is: You can’t screw it up :)
That’s the reason I use it. It means that the system areas are read-only, and as a user you can’t “wreck” anything by mistake.
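Concretely, on an image-based system it plays out something like this (sketch; the package is just an example):

    sudo touch /usr/lib/example.txt   # fails: /usr is mounted read-only on image-based systems
    sudo rpm-ostree install htop      # packages are layered onto the image instead
    # The change is staged and applies on the next boot; the previous image is kept for rollback.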
The thing Synology offers that no competitor has matched yet is rock-solid stability.
I have a 10 year old Synology running as well as it did the day I bought it, and I’ve never needed to troubleshoot a single issue on it.
Until a competitor can match that, I’ll keep buying Synology, and the increased drive price is the cost I pay for that stability.
I think people underestimate the value of that.