

You’ll end up with better quality images this way compared to transferring them to Canon servers where they’ll likely be compressed or altered.
Don’t worry, once Canada enacts its firearm confiscation, Ukraine will have a plethora of used .22 LR rifles in the finest hot-pink camo and magazines pinned to 10-15 rounds.
Have you tried a user-agent switcher to trick their website into thinking you’re using Safari on macOS?
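If you want a quick way to check whether the site really keys off the user agent, something like this works (the UA string below is just an example, grab a current one from your browser):
# Fetch the page pretending to be Safari on macOS; swap in the URL you’re testing
curl -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15" https://example.com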
Reverse proxying was tricky for me. I started with Nginx Proxy Manager and it started out fine; I was able to reverse proxy my services in the staging phase, however once I tried to get production SSL/TLS certificates it kept running into errors (this was a while ago, I can’t remember exactly), so that pushed me to SWAG, and SWAG worked great! Reverse proxying was straightforward and the SSL/TLS certificates worked well, however overall it felt slow, so now I’m using Traefik and so far have no complaints.
It’s honestly whatever works for you and what you prefer having.
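For anyone wanting a starting point, my Traefik setup is roughly shaped like this (the image tag, service names, and domain are placeholders, not my actual config):
services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false   # only proxy containers that opt in via labels
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # Traefik watches Docker for containers to route
  whoami:
    image: traefik/whoami   # tiny test service to confirm routing works
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=web
Once that routes correctly you bolt on a 443 entrypoint and a Let’s Encrypt certificate resolver for the SSL/TLS side.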
I honestly never tried Ventoy myself so I can’t really give you a proper answer to this, however after reading into it I see no reason why it wouldn’t work. So long as GParted can access the system’s disks there shouldn’t be an issue.
Put the GParted ISO on a thumb drive using Rufus or BalenaEtcher, in your BIOS change the boot order so that GParted boots first, then boot into GParted and readjust/delete your partitions as need be.
Pretty straightforward for the most part.
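If you’d rather skip the GUI tools, writing the ISO from an existing Linux box works too; just be very sure of the device name, /dev/sdX below is a placeholder and dd will happily overwrite the wrong disk:
# Write the GParted Live ISO straight to the USB stick, then flush write caches
sudo dd if=gparted-live.iso of=/dev/sdX bs=4M status=progress conv=fsync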
I agree, hence why I left the note at the bottom of that comment. Yes, it does encourage bad practices, but if all OP cares about is that it works then it should be fine.
In my other comment I instructed OP to move the volume to their user’s home directory so they don’t run into permission issues like this again.
Taking a look at your docker-compose.yml I see this volume mount:
volumes:
  - /volume1/SN/Docker/searxng-stack/searxng:/etc/searxng:rw
where /volume1/SN/Docker/searxng-stack/searxng is the directory on your system that Docker is attempting to use to store the container’s files from /etc/searxng.
Example of a volume mount that’ll likely work better for you:
volumes:
  - /home/YourUser/docker/config/searxng:/etc/searxng:rw
The tilde (~) acts as your current user’s home directory (aka /home/YourUser), which is not owned by root and is where Docker persistent volumes should be stored.
Edit: I feel like I was wrong here; given that you run sudo docker compose up -d, the tilde will likely not work and will instead point to the /root directory. I’ve updated the above to reflect the appropriate directory for your volume mount.
After making the change over to that directory and configuring SearXNG how you like, re-create your Docker container with sudo docker compose up -d --force-recreate
Apologies for the poor formatting, typing this on mobile.
Edit:
Note: if you want to expose the port, do not add the 127.0.0.1 like how I have in my docker-compose.yml.
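To illustrate, that’s the difference between these two ports entries (8080 is just a placeholder for whatever host port you use):
ports:
  - "127.0.0.1:8080:8080"   # bound to localhost only, not reachable from other machines
versus
ports:
  - "8080:8080"   # bound on all interfaces, reachable from the rest of your LAN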
Edit 2: Corrected some things…
Have you checked the directory & file permissions with ls -la /Your/SearXNG/WorkingDir?
The error in your log is telling you that the container does not have permission to that directory/file. You can essentially bypass this with sudo chmod 777 /Your/SearXNG/WorkingDir/* and sudo chown 1000:1000 /Your/SearXNG/WorkingDir/*
However, if you’re looking for security best practices this is not advisable, but if all you care about is that it works it should be fine.
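If you do want to keep it a little tighter, something along these lines is the usual compromise (assuming the container actually runs as UID/GID 1000, which is worth confirming for your image):
# Hand ownership to the container’s user instead of opening the directory to everyone
sudo chown -R 1000:1000 /Your/SearXNG/WorkingDir
# Owner gets read/write, group gets read, everyone else gets nothing
sudo chmod -R u=rwX,g=rX,o= /Your/SearXNG/WorkingDir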
Late to the party, but I decided to pick up a 13th gen ASUS NUC with an i7 over a prebuilt NAS, bought a couple of external hard-disk bays, and set up Proxmox running a headless Debian 12 VM. Almost everything runs great, however my mistake was using Debian 12, because the Linux kernel is pretty far out of date and does not support the CPU properly.
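For what it’s worth, on Debian 12 you can usually pull a much newer kernel from bookworm-backports rather than reinstalling, roughly like this:
# Enable the backports repo, then install the newer kernel from it and reboot
echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports linux-image-amd64
sudo reboot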
Does your laptop have 2 GPUs?
NVIDIA Optimus sucks on Linux; I would suggest looking into EnvyControl and forcing Xorg & xrandr to use your NVIDIA GPU primarily and not the iGPU.
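If I remember the EnvyControl CLI right it goes roughly like this, though double-check its README in case the flags have changed:
# Check which graphics mode is currently active
sudo envycontrol --query
# Switch to the dGPU as the primary GPU, then reboot for it to take effect
sudo envycontrol -s nvidia
sudo reboot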
I’ve used Unbound for years but recently had to switch to Blocky for some weird reason.
Blocky doesn’t appear to be a recursive DNS resolver? It seems to still rely on upstream providers, whereas Unbound resolves recursively itself, starting from the root name servers, for the domains you look up.
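That recursive behaviour is basically just the absence of any forward-zone in unbound.conf; a minimal sketch of what mine roughly looks like (the port and root-hints path are examples):
server:
  interface: 127.0.0.1
  port: 5335
  do-ip4: yes
  do-udp: yes
  do-tcp: yes
  # list of root name servers to start recursion from
  root-hints: "/var/lib/unbound/root.hints"
  harden-dnssec-stripped: yes
  cache-min-ttl: 300
  # no forward-zone block at all, so Unbound walks the DNS tree itself instead of forwarding upstream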
The day I do the old-fashioned sudo apt update && sudo apt upgrade and everything suddenly breaks is when I know I’m on Debian 13.
My ASUS WRT router running Merlin firmware offers to host a WireGuard server; I simply use the WireGuard app, dump the config file in, and hit connect.
Took a little configuration but eventually got it working how I want it.
Edit: The reason for the Merlin firmware is that I can route my VPN server through my VPN provider. It goes a little like this:
5G/LTE > WireGuard to my router > Router routes that connection to ProtonVPN
This gives me access to the resources in my home while also reaping the benefits of my VPN provider.
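For reference, the config file you dump into the WireGuard app is only a handful of lines; the keys, addresses, and endpoint below are placeholders:
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24
DNS = 192.168.1.1

[Peer]
PublicKey = <router-public-key>
Endpoint = your-ddns-hostname.example.com:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
AllowedIPs = 0.0.0.0/0 is what pushes all of the phone’s traffic through the tunnel; narrow it to your LAN subnet if you only want access to home resources.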
I don’t know how developed your school system is, but I would advise the principal to block the sites via DNS; that way the computers won’t resolve them.
AdGuard Home, Pi-hole, and OPNsense are free, open-source ways to do that, however chances are your school already manages its own DNS, so I would obviously consult with them first.
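For a rough idea of what the block itself looks like, Pi-hole (or anything else dnsmasq-based) can take a rule like this, where the domain and file path are just examples:
# /etc/dnsmasq.d/05-blocked.conf
# Answer the domain and all of its subdomains with 0.0.0.0 so clients can’t reach it
address=/blocked-site.example/0.0.0.0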
all the Linux
Unafuckingcceptable.
This is why I chose an ASUS NUC + external bay storage for my home networking needs; it felt like a Synology NAS would be a limiting factor.
When I first researched Linux distros and learned that Ubuntu, Linux Mint, Kali Linux, etc. were all derivatives of Debian, I knew it was the distro I wanted to learn.
Granted, the packages do tend to fall behind and the Linux kernel is quite outdated on Debian 12, however it works great for 99% of tasks (including gaming!).