

Even if Illinois were feasible, I don’t think I’d want that. I’d rather fix the system. And quit dancing around the issue of Puerto Rico statehood.
I’ve been running mail servers for about thirty years, both my own personal ones and production systems for 100K+ users.
The personal one is a pain for the reasons you mentioned. I use sendmail instead of postfix, but I was able to use some rules to push certain messages through other relays.
I signed up for Amazon SES and have so far stayed within their free tier. Mail coming from one of my addresses always goes through SES, and mail from any address to certain domains (aol.com, gmail.com, etc.) goes through SES as well.
It allows me to ensure delivery for my important mails, but leave things up to chance for less important ones.
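For anyone curious, the relevant bits look roughly like this. It’s only a sketch of the destination-domain half (the sender-address routing needed a custom ruleset on top of it), and the SES region, credentials, and paths here are placeholders, not my real config:

```
dnl # sendmail.mc – enable per-domain routing and SMTP AUTH toward the relay
FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
FEATURE(`authinfo', `hash -o /etc/mail/authinfo.db')dnl

# /etc/mail/mailertable – push the picky destinations through SES
gmail.com    relay:[email-smtp.us-east-1.amazonaws.com]
aol.com      relay:[email-smtp.us-east-1.amazonaws.com]

# /etc/mail/authinfo – SMTP credentials for the SES endpoint
AuthInfo:email-smtp.us-east-1.amazonaws.com "U:root" "I:SES_SMTP_USERNAME" "P:SES_SMTP_PASSWORD" "M:PLAIN"
```

Rebuild sendmail.cf and regenerate the hash maps with makemap after editing, of course.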
It’s the best solution I’ve been able to come up with for a really annoying situation. Big Tech ruined it all.
Also, note that it doesn’t rewrite old data to the wider stripe; that only applies to future writes.
But you could copy the old data to a new location and it would take advantage of the new stripe width.
It used to be that you couldn’t grow the pool, so you needed all of your drives up-front.
Now you can start with four drives and slowly grow over time to whatever your target is. It’s much friendlier for home labs and tight budgets.
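If it helps, the workflow looks roughly like this (a sketch assuming OpenZFS 2.3+ with RAIDZ expansion; the pool name tank, vdev name raidz1-0, and device/dataset names are just examples):

```
# Grow an existing raidz vdev by one disk (RAIDZ expansion)
zpool attach tank raidz1-0 /dev/sdd

# Watch the expansion/reflow progress
zpool status tank

# Existing records keep their old data:parity ratio; rewriting them
# picks up the wider stripe, e.g. by copying into a new dataset:
zfs create tank/media_new
cp -a /tank/media/. /tank/media_new/
# (or zfs send | zfs recv the dataset and destroy the original)
```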
Finally! #15022, it’s been a long time coming…
I like LibreCAD, but it’s a little too simple sometimes. I miss the power of AutoCAD, but I don’t miss its price.
Three things I want are
It took a couple of days to get used to and probably a week of use before I was 100% comfortable, but I find that it meets most of my needs now.
I use LibreCAD for architecture work and will take a look at FreeCAD.
Has anyone else tried both for architectural work? How did they compare for you?
It wasn’t always followed on Reddit, but downvoting there was supposed to be for comments that don’t contribute to the conversation.
Here the guidance is looser – the docs don’t address comments, but do say to “upvote posts that you like.”
I’ve tried contributing to conversations, sometimes presenting a different viewpoint in the interest of exchanging ideas, but that often results in massive downvotes simply because people disagree. I’m not going to waste my energy contributing to a community that ends up burying my posts because we have different opinions.
That’s true on Reddit too, so I’m kind of being tangential to the original question. I guess what I’m saying is that some people might feel like I do and won’t engage in any community, be it Reddit or Lemmy, if it’s just going to be an echo chamber.
I’ve been doing this for 30+ years and it seems like the push lately has been towards oversimplification on the user side, but at the cost of resources and hidden complexity on the backend.
As an Assembly Language programmer, I’m used to programming with resource consumption in mind. Did using that extra register just cause a couple of extra PUSH and POP instructions in the loop? What’s the overhead of that?
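To make that concrete, here’s a tiny, hypothetical x86-64 (System V ABI) fragment, not from any real project: pulling a callee-saved register like rbx into the hot path buys you an extra save/restore pair, and that’s exactly the kind of cost I’m used to weighing:

```
count_down:
        push    rbx             ; preserve the caller's rbx – extra stack traffic
        mov     rbx, rdi        ; first argument becomes the loop counter
.loop:
        dec     rbx
        jnz     .loop
        pop     rbx             ; restore before returning
        ret
```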
But now some people just throw in a JavaScript framework for a single feature and don’t even worry about how it works or the overhead as long as the frontend looks right.
The same is true with computing. We’re abstracting containers inside of VMs on top of base operating systems which is adding so much more resource utilization to the mix (what’s the carbon footprint on that?) with an extremely complex but hidden backend. Everything’s great until you have to figure out why you’re suddenly losing packets that pass through a virtualized router to linuxbridge or OVS to a Kubernetes pod inside a virtual machine. And if one of those processes fails along the way, BOOM! it’s all gone. But that’s OK; we’ll just tear it down and rebuild it.
I get it. I understand the draw, and I see the benefits. IaC is awesome, and the speed with which things can be done is amazing. My concern is that I’ve seen a lot of people using these things who don’t know what’s going on under the hood, so they often make assumptions or mistakes that lead to surprises later.
I’m not sure what the answer is other than to understand what you’re doing at every step of the way, and always try to choose the simplest route (but future-proofed).
I set up LinkWarden about a month ago for the first time and have been enjoying it. Thank you!
I do have some feature requests – is GitHub the best place to submit those?
I’m a big fan of netdata; it’s part of my standard deployment. I put in custom configs depending on which services are running on which servers. If there’s an issue, it sends me an email and posts to a Slack channel.
Next step is an influxdb backend to keep more history.
I also use monit to restart certain services in certain situations.
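The configs are nothing fancy; roughly this shape (the chart, thresholds, recipients, and webhook below are made-up examples, not my real setup):

```
# /etc/netdata/health.d/cpu_custom.conf – one custom alarm
 alarm: cpu_high
    on: system.cpu
lookup: average -1m unaligned of user,system
 units: %
 every: 1m
  warn: $this > 80
  crit: $this > 95
  info: CPU utilization averaged over the last minute
    to: sysadmin

# /etc/netdata/health_alarm_notify.conf – where the alarms go
SEND_EMAIL="YES"
DEFAULT_RECIPIENT_EMAIL="me@example.com"
SEND_SLACK="YES"
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/XXX/YYY/ZZZ"
DEFAULT_RECIPIENT_SLACK="alerts"
```

The monit side is just the usual “check process … if failed … then restart” stanzas.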
I wish it were database-agnostic. And I’m slightly concerned about the version three rewrite.
It does look awesome, and I’ll revisit it to see where things are in six months.
Yup! Since 1993… Started Linux on my desktop and haven’t looked back.
I thought you were going to say you liked lint (the source code checker).
We had fiber at our previous house for about six years, and it was great. The prices were lower, the speeds were higher, there were no data caps… It’s kind of funny, because it was a college town of about 200K people in the middle of nowhere.
Now I’m up in the suburbs of Chicago where a single town can have a 200K population, but fiber is nowhere on the horizon. Instead we get terrible service that’s constantly showing packet loss with slow transfer rates. We do still have unlimited, but with these transfer rates it doesn’t really matter. :)
As far as monitoring traffic goes, I guess it depends on how you’re doing things. If your DNS requests are still hitting your ISP or aren’t encrypted, then yeah, they might know. I don’t know if they’ll care, but of course not all illegal content is treated the same.
So basically a non-answer to your question, along with me saying I liked having fiber.
I have one set up as an irrigation controller. I was going to build an OpenStack cluster to test configuration settings on (I run a production cluster at work), but gave up when the supply chain problems happened and prices skyrocketed.
Thank you. I hadn’t considered the payment part. The cloud system that I manage is in education, so everyone pays in advance.
This makes sense, and I’ll start with a lower number and ask it to go up later. It will take a couple of months to migrate everything from Linode anyhow, so I don’t need them all at once.
My identity infrastructure alone uses a whole bunch of servers.
There are the three Kerberos servers, the two clusters of multiple LDAP servers behind HAProxy, the rabbitmq servers to pass requests around, the web servers also balanced/HA behind HAProxy… For me, service reliability and security are two of the biggest factors, so I isolate services and use HA when available.
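The LDAP balancing piece, for example, is just a TCP-mode HAProxy backend with the built-in LDAP health check. Something like this sketch (names and addresses are placeholders):

```
frontend ldap_in
    bind *:389
    mode tcp
    default_backend ldap_servers

backend ldap_servers
    mode tcp
    balance roundrobin
    option ldap-check
    server ldap1 10.0.10.11:389 check
    server ldap2 10.0.10.12:389 check
    server ldap3 10.0.10.13:389 check
```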
I told them everything that I wrote here in my original request – I need 25 now, but would like a quota of 50 to maintain elasticity, testing, etc.
They followed up with the request for actual resources needed.
I haven’t answered since then.
“Oh, that’s wonderful. Although I suppose I should mention—I’m acquainted with some people who have artificial intelligence too.”