I’m not familiar enough with Cloudflare’s proxy stuff. I just have my DNS pointed at my router’s external IP (and luckily my ISP never resets it). It sounds like CF has designed this intentionally as a profit center. Sorry I couldn’t be more help.
This isn’t a Cloudflare limitation, it’s a TLS limitation. Not supporting multi-level wildcards was a conscious decision in the spec, so you won’t find a service that supports it. Most people get around it by not structuring their certs like that: you can encode your multi-level namespacing in a single level. So instead of something like svc1.svcgroup.dev.domain.org, do svcgroup-svc1.dev.domain.org, which a plain *.dev.domain.org wildcard covers (see the sketch below).
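To make the single-label rule concrete, here’s a minimal Go sketch (a hypothetical helper, not from any real TLS library) of how wildcard matching behaves: the `*` covers exactly one DNS label, which is why the flattened name matches and the nested one doesn’t.

```go
package main

import (
	"fmt"
	"strings"
)

// matchWildcard mimics the TLS wildcard rule (RFC 6125): "*" covers
// exactly one DNS label, never "anything.anything".
func matchWildcard(pattern, host string) bool {
	if !strings.HasPrefix(pattern, "*.") {
		return pattern == host
	}
	suffix := pattern[1:] // e.g. ".dev.domain.org"
	if !strings.HasSuffix(host, suffix) {
		return false
	}
	firstLabel := strings.TrimSuffix(host, suffix)
	// The wildcard must match a single, non-empty label.
	return firstLabel != "" && !strings.Contains(firstLabel, ".")
}

func main() {
	pattern := "*.dev.domain.org"
	fmt.Println(matchWildcard(pattern, "svcgroup-svc1.dev.domain.org")) // true
	fmt.Println(matchWildcard(pattern, "svc1.svcgroup.dev.domain.org")) // false: spans two labels
}
```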
I’ve never heard of a tool that gets around this TLS limitation. There are tools that manage lots of certs, though (cert-manager in k8s comes to mind). A more concrete example of what you’re doing would help people suggest solutions.
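If you do go the cert-manager route, a single wildcard Certificate covers every flattened name under one level. A minimal sketch, assuming a DNS-01-capable ClusterIssuer already exists (the issuer name, namespace, and secret name here are all placeholders; wildcards require the DNS-01 challenge):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dev-wildcard
  namespace: default
spec:
  secretName: dev-wildcard-tls
  issuerRef:
    name: letsencrypt-dns   # placeholder ClusterIssuer with a DNS-01 solver
    kind: ClusterIssuer
  dnsNames:
    - "*.dev.domain.org"    # covers svcgroup-svc1.dev.domain.org
```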
Re: ARM SBC Replacement for my k3s cluster (Selfhosted@lemmy.world, 3 months ago)
The only Radxa I’d bother with is the Rock 5, and at that price I’d probably just go with an RPi 5 (unless you like to tinker… a lot). That’s coming from someone who owns three Rock 5s. The new Orion board looks interesting, but if it’s like any other Radxa product it’ll be 2+ years before it gets decent software support.
Re: What’s up, selfhosters? It’s selfhosting Sunday! (Selfhosted@lemmy.world, 3 months ago)
There’s a fine line between “auto-updates are bad” and “welp, the horribly outdated, security-hole-riddled CI tool or CMS is how they got in”. I tend to lean toward using something like Renovate to queue up the updates and then approve them all at once (config sketch below). I’ve been seriously considering building out staging and prod environments for my homelab; I’m just not sure how I’d test stuff in staging thoroughly enough to feel comfortable auto-promoting to prod.
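For reference, a minimal `renovate.json` sketch of that queue-then-approve flow. The grouping rule is just an example; `dependencyDashboardApproval` makes every update wait for a manual tick on the dashboard issue before Renovate opens a PR:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "dependencyDashboard": true,
  "dependencyDashboardApproval": true,
  "packageRules": [
    {
      "matchManagers": ["docker-compose", "kubernetes"],
      "groupName": "container images"
    }
  ]
}
```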
I have a few Aoostar R7s (4 of them in a hyperconverged ceph + cloud-hypervisor + k0s cluster, but that’s overkill for most people). They have been rock solid. There’s also an N100 version with less storage expansion if you don’t need it. My nodes probably idle at about 20 W fully loaded with drives (2x NVMe, 1x SATA SSD, 1x SATA HDD), running ~15 containers and a VM or two. You should easily be able to get one (plus memory and drives) for $1000. Throw Proxmox and/or some NAS OS on it and you’re good to go.
Re: How does hypixel have their website and minecraft server on their root domain? (I would like to do something similar) (Selfhosted@lemmy.world, 8 months ago)
Caddy can do both. If you’re using a wildcard already, stick with it. In fact, I’d say it’s more prudent to use wildcards (with DNS challenges) than HTTP challenges: then you aren’t listing all of your subdomains in Let’s Encrypt’s public certificate-transparency logs for everyone to see. Nobody needs to know you’ve got a site called bulwarksdirtyunderpants.bulwark.ninja. A minimal Caddyfile sketch follows.
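A sketch of the wildcard + DNS-challenge setup, assuming a Caddy build that includes the Cloudflare DNS plugin (any supported DNS provider works the same way; the token variable and upstream port are placeholders):

```caddyfile
*.bulwark.ninja {
	# DNS-01 challenge, so individual hostnames never hit CT logs
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	# route one hostname under the wildcard to a local service
	@underpants host bulwarksdirtyunderpants.bulwark.ninja
	handle @underpants {
		reverse_proxy 127.0.0.1:8080
	}
}
```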
Re: Using refurbished HDDs in my livingroom NAS (Selfhosted@lemmy.world, 8 months ago)
Good write-up. Thanks for the good lessons-learned section.
tmux is your friend for running long jobs that need to survive a disconnect. And I agree with the other post about btrfs send/receive (example below).
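Roughly what that looks like in practice (the hostname, session name, and paths are made up):

```sh
# Start a named tmux session; if SSH drops, the job keeps running.
# Reattach later with: tmux attach -t migrate
tmux new -s migrate

# Inside the session: btrfs replication needs a read-only snapshot
btrfs subvolume snapshot -r /data /data/.migrate
btrfs send /data/.migrate | ssh newnas btrfs receive /pool
```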
Re: Notification when new app versions are released (Selfhosted@lemmy.world, 1 year ago)
Argus: https://release-argus.io
They’ve been rock solid so far, even through the initial sync from my old file server (pretty intensive network and disk usage for about 5 days straight). I’ve only been running them for about 3 months, though, so time will tell. And they’re like most mini-PC manufacturers with funny names: I doubt I’ll ever see any sort of BIOS/UEFI update.
Internet:
- 1G fiber
Router:
- N100 with dual 2.5G NICs
Lab:
- 3x N100 mini PCs as k8s control plane + ceph mon/mds/mgr
- 4x Aoostar R7 “NAS” systems (5700U / 32G RAM / 20T rust / 2T SATA SSD / 4T NVMe) as ceph OSDs / k8s workers
Network:
- Hodgepodge of switches I shouldn’t trust nearly as much as I do:
- 3x 8-port 2.5G switches (one with PoE for the APs)
- 1x 24-port 1G switch
- 2x Omada APs
Software:
- All the standard stuff for media archival purposes
- Ceph for storage (using some manual tiering in cephfs)
- K8s for container orchestration, deployed via k0sctl (minimal config sketched after this list)
- A handful of cloud-hypervisor VMs
- Most of the lab is managed by some tooling I’ve written in Go
- Alpine Linux for everything
All under 120 W of power usage
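The k0sctl config mentioned above is just a YAML inventory of hosts; a minimal two-host sketch with made-up addresses (the real file would list every node):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    - role: controller
      ssh:
        address: 10.0.0.11   # e.g. one of the N100 control-plane boxes
        user: root
    - role: worker
      ssh:
        address: 10.0.0.21   # e.g. one of the R7 workers
        user: root
```

`k0sctl apply --config k0sctl.yaml` then converges the cluster to that spec.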
That’s a basic requirement for almost any company. If you’re into hard-coding credentials, just use WireGuard directly (sketch below).
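That is, a static `wg0.conf` where the “credential” is just a keypair baked into the file; every value below is a placeholder:

```ini
# /etc/wireguard/wg0.conf -- bring up with: wg-quick up wg0
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.org:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```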