

That’s just like your opinion man.
Yeah, for sure there’s a ton of clickbait, but this isn’t “a minor technical matter”. The news here isn’t the clash over whether the patch should be accepted in the RC branch, but the fact that Linus said he wants to remove bcachefs from the kernel tree.
I’m sure many people don’t even think about that. Having to reinstall all your packages from scratch is not something they do frequently.
And for the people who are looking to optimize the initial setup, there are many ways to do it without a declarative package manager. You can, for example, keep a list of your explicitly installed packages and reinstall from it in one go.
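A minimal sketch of that approach with pacman (assuming an Arch-based system; the file name is arbitrary):

```sh
# On the old system: save the names of explicitly installed packages
pacman -Qqe > pkglist.txt

# On the new system: reinstall them in one go,
# skipping anything that is already installed
sudo pacman -S --needed - < pkglist.txt
```

Combined with a dotfiles repo for your configs, this covers a lot of what people reach for a declarative package manager for.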
So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren’t. How are you getting at the hidden cells with blkdiscard?
The idea is that blkdiscard will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?
I feel that, unless you know the SSD supports secure trim, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.
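For reference, a rough sketch of the commands being compared (the device name is a placeholder, and all of these are destructive):

```sh
# Plain TRIM: tells the controller the blocks are unused;
# what actually happens to the cells is entirely up to the firmware
blkdiscard /dev/sdX

# -z: explicitly writes zeroes through the block layer instead of discarding
blkdiscard -z /dev/sdX

# The dd equivalent of the zero-fill
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```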
After reading all of this I would just do both… Each method fails in different ways so their sum might be better than either in isolation.
But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.
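E.g. a minimal LUKS sketch (device and mapper names are placeholders):

```sh
# Everything that reaches the drive is ciphertext at rest,
# so any hidden or remapped cells only ever hold encrypted data
cryptsetup luksFormat /dev/sdX2
cryptsetup open /dev/sdX2 cryptroot
mkfs.ext4 /dev/mapper/cryptroot
```

Then “wiping” the drive reduces to destroying the keyslots (e.g. with cryptsetup luksErase) instead of chasing unreachable cells.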
I don’t see how attempting to over-write would help. The additional blocks are not addressable on the OS side. dd will exit when it reaches the end of the visible device space, but blocks will remain untouched internally.
The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.
Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.
in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data.
Your conclusion is incorrect because you made the assumption that the SSD has exactly the advertised storage or infinite storage. What if it’s over-provisioned by a small margin, though? For example, a drive that advertises 500 GB but physically contains 512 GB of flash has around 12 GB of cells that an overwrite of the visible device can never reach.
He didn’t say anything about Nazism being an opinion you disagree with.
This is literally the only point the article makes and there’s no point even discussing it further if you’re too blind or dishonest to admit that.
You don’t have to trust Drew, though. Vaxry is pretty clear on his stance on the subject.
if I run a discord server around cultivating tomatoes, I should not exclude people based on their political beliefs, unless they use my discord server to spread those views.
which means even if they are literally adolf hitler, I shouldn’t care, as long as they don’t post about gassing people on my server
that is inclusivity
Source: https://blog.vaxry.net/articles/2023-inclusiveActivists
Note how this article is not where he first stated the above. This article is where he doubles down on the above statement in the face of criticism. In the rest of the article he presents nazism as an opinion people might have that you disagree with. He argues that his silent acceptance of nazis is the morally correct stance while inclusive communities are toxic actually.
This means that it’s not just Drew or the FDO who are arguing that Vaxry’s complete lack of political stance is creating safe spaces for fascists. It’s Vaxry himself that explicitly states this is happening and that it’s intentional on his part.
C is pretty much the standard for FFI: you can use C libraries with Rust, and Redox even has its own C standard library implementation.
Right, but I’m talking specifically about a kernel which supports building parts of it in C. Rust as a language supports this, but you also have to set up all your processes (building, testing, doc generation) to work with a mixed codebase. To be clear, I don’t imagine that this part is that hard. When I called this a “more ambitious” approach, I was mostly referring to the effort of maintaining forks of linux drivers and API compatibility.
Linux does not have a stable kernel API as far as I know; only userspace API & ABI compatibility is guaranteed.
Ugh, I forgot about that. I wonder how much effort it would be to keep up with the linux API changes. I guess it depends on how many linux drivers you would use, since you don’t need 100% API compatibility. You only need whatever is used by the drivers you care about.
Does it have to be Linux?
In order to be a viable general use OS, probably yes. It would be an enormous amount of effort to reach a decent range of hardware compatibility without reusing the work that has already been done. Maybe someone will try something more ambitious, like writing a rust kernel with C interoperability and a linux-like API so we can at least port linux drivers to it as a “temporary” solution.
Right, so this is exactly the sort of “benefit” I never expect to see. This is not something that has happened to me in ~25 years of computer use, and if it does happen there are better ways to deal with it. Btrfs and zfs have quotas for this, but even if they didn’t it would not be worth the tradeoff for me. Mispredicting the partition sizes I’ll end up needing after years of use is both more likely to happen and more tedious to fix.
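E.g. with btrfs, capping a runaway directory is a couple of commands (a sketch; the path and limit are arbitrary, and /home must be a subvolume):

```sh
# Enable quota tracking on the filesystem, then cap the subvolume
btrfs quota enable /
btrfs qgroup limit 50G /home
```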
Are you going to dual boot? Do you have some other special requirement? If not, there’s no reason to overthink partitioning in my opinion. For my main NVMe I just did a small EFI system partition plus one big btrfs partition for everything else.
I use a swap file, so I don’t use a swap partition. If you want more control over specific parts of the filesystem, e.g. a separate /home that you can snapshot or keep when reinstalling the system, then use btrfs subvolumes. This gives you a lot of the features a partition would give you without committing to a specific size.
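A rough sketch of that setup (the partition name is a placeholder, and @/@home is just a common naming convention, not a requirement):

```sh
# Create the subvolumes on the freshly formatted btrfs partition
mount /dev/nvme0n1p2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Mount them as / and /home
mount -o subvol=@ /dev/nvme0n1p2 /mnt
mkdir -p /mnt/home
mount -o subvol=@home /dev/nvme0n1p2 /mnt/home

# Later, snapshot home without touching the rest of the system
btrfs subvolume snapshot -r /mnt/home /mnt/home-snapshot
```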
This is the only partitioning scheme I have never regretted. Whenever I’ve tried separate partitions, I’ve ended up regretting the sizes I allocated. On the other hand, I have not actually seen any benefit of the separation in practice.
How is this any less meaningful than any other use case? Is downloading a distro to play video games ok? To shitpost on social media? To watch clickbait videos on youtube? Why is this in particular a bad use of resources?
How many months should he have waited for an authoritative response?
Well, Marcan should wait as long as feels right to him. As I said previously, I’m pretty sure he was already pissed off about previous R4L issues and he didn’t quit because of this alone. I want to be clear that I’m commenting solely on the expectation of a swifter response from leadership in the original email thread and not on Marcan’s decision to step down, which I can’t be the judge of.
So, I expect people in places of power to take their time when they respond publicly to issues like this, for various reasons.
At the very least, I would have waited to see what happens with the patches if I were in his position. The review process, which kept going in the meantime, essentially sets a timer for a decision to be made. In the end, Hellwig’s objections would either be acknowledged as blocking or they would be ignored. In any case there would have been a clear stance from the project’s leadership. It makes sense to me to wait for this inevitable outcome before making a committal decision such as stepping down.
Christoph Hellwig’s initial message was on 2025-01-08. Marcan’s stepping down was on 2025-02-07. So no, it’s not several months; it’s barely one month. Getting into fights on mailing lists and making social media posts is not everyone’s first reaction, and it is arguably not the best reaction, especially for people in places of power. It is silly for Marcan to demand everybody’s reaction to be as loud and as quick as his own.
It was very clear that the reaction was going to be no reaction.
Well, it turns out that the reaction pretty clearly was not “no reaction”. That’s the reason this thread we’re talking in exists. Marcan was objectively wrong if he assumed Hellwig’s comments and nack would be accepted. Instead, Hellwig was explicitly called out for having no say on the matter and for producing “garbage arguments”.
Marcan is not the submitter. Unless I’ve missed something, the submitter is still working on the patch.
Marcan was probably fed up and was looking for a reason to leave. If that’s not the case, then it’s silly for him to just quit mid-discussion, before it had even become apparent what the reaction to Christoph Hellwig’s behavior would be and whether his reply would even be taken into account during the review process.
Arch doesn’t require you to “read through all changelogs”. It only requires that you check the news. News posts are rare, their text is short, and not all news posts are about you needing to do something to upgrade the system. Additionally, pacman wrappers like paru check the news automatically and print it to the terminal before upgrading the system. So you don’t even have to remember to open a browser and check.
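With paru, for example, this is just a config option (assuming a reasonably recent version; see man paru.conf):

```
# /etc/paru.conf
[options]
NewsOnUpgrade
```

With that set, paru -Syu prints any unread Arch news before proceeding with the upgrade.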
Arch is entirely about “move fast and break stuff”.
No, it’s not. None of the things that make Arch hard for newbies have anything to do with the bleeding edge aspect of Arch. Arch does not assume your use case and will leave it up to you to do stuff like edit the default configuration and enable a service. In case of errors or potential breakage you get an error or a warning and you deal with it as you see fit. These design choices have nothing to do with “moving fast”. It’s all about simplicity and a DIY approach to setting up a system.
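A typical example of that DIY step (sshd is just an illustration):

```sh
# Nothing gets enabled for you; installing a package does not start its service
sudo pacman -S openssh
sudo systemctl enable --now sshd.service
```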
The latter is I think aiming for Linux ABI compatibility.
I had never heard of Asterinas, but this sounds like the best approach to me. I believe alternative OSes need to act as (near) drop-in replacements if they want to be used as daily drivers. ABI-incompatible alternatives might be fine for narrower use cases, but most people wouldn’t even try out a desktop OS that doesn’t support most of the hardware and software they already use.
The common misconception that swap is pointless stems from misunderstanding what it’s supposed to do. You shouldn’t be triggering the OOM killer frequently anyway. In the much more normal case, where you’re only using some of your RAM for running applications, the rest is used as a filesystem cache/buffer. Having swap space available gives your OS the option to evict stale application memory from RAM rather than the filesystem cache, when that would be the optimal choice to make.
This page explains it in detail: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
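For what it’s worth, setting up some swap as a file is cheap (a sketch for a non-btrfs filesystem; the size is arbitrary):

```sh
# Create, lock down, format, and enable a 4 GiB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab
```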