

Goes to show that technology can only get you so far if your doctrine sucks.
Also, they downed one and damaged another, so why did they cross off three?
That boolean can indicate whether it’s a fancy character: that way all ASCII characters stay themselves, but if the boolean is set it’s something else. We could take the other symbol from a page of codes that fits the user’s language.
Or we could let true mean that the character is larger than a single byte, allowing us to encode all of Unicode as a sequence of 8-bit parts.
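A minimal Python sketch of that second idea (which is essentially what UTF-8 does): the top bit of each byte tells you whether it is plain ASCII or part of a longer character.

```python
# Inspect the top bit of every byte in the UTF-8 encoding of a few characters.
for ch in "aé€":
    for byte in ch.encode("utf-8"):
        kind = "plain ASCII" if byte < 0x80 else "part of a longer character"
        print(f"{ch!r} byte {byte:08b}: {kind}")
```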
That requires some form of self-describing format and will probably end up looking like a sparse matrix.
It might also introduce spurious data dependencies.
Those need to be in the smallest cache or a register anyway. If they are in registers, a modern, instruction-reordering CPU will deal with that fine.
To store a bit, you now also need to read the old value of the byte it’s in.
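A minimal sketch of that read-modify-write, using a bytearray as a packed bitvector (the names are made up for illustration):

```python
bits = bytearray(16)                      # 128 packed booleans, all false

def set_bit(buf: bytearray, i: int) -> None:
    old = buf[i >> 3]                     # read the whole byte the bit lives in
    buf[i >> 3] = old | (1 << (i & 7))    # write it back with just one bit changed

set_bit(bits, 42)
print(bits[5])                            # 4: bit 2 of byte 5 is now set
```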
Many architectures read the cache line on write-miss.
The only cases I can see where byte-sized bools seem better are either using so few that they all fit in one cache line anyway (in which case performance will be great either way), or repeatedly accessing a bitvector from multiple threads, in which case you should make sure that’s actually what you want to be doing.
C/C++ treats any nonzero number as true, and only zero as false. That would allow you to guard against going from true to false via a bit flip, but not from false to true.
Other languages like Rust define 0 as false and 1 as true, with any other bit pattern being invalid for a bool.
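A small sketch of that guard, judging bytes by the C rule (nonzero is true, only zero is false); the stored values 0xFF and 0x00 are just an illustration:

```python
def c_truthy(byte: int) -> bool:
    return byte != 0                       # C's rule: any nonzero byte counts as true

for label, stored in (("true (0xFF)", 0xFF), ("false (0x00)", 0x00)):
    # Flip each of the 8 bits once and check whether the meaning survives.
    survives = all(c_truthy(stored ^ (1 << bit)) == c_truthy(stored) for bit in range(8))
    print(f"{label}: survives any single bit flip -> {survives}")
# true (0xFF) survives (still nonzero), false (0x00) does not (it becomes nonzero).
```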
mostly present in fairytales these days
It’s also the currency in the German-language Donald Duck comics.
I think you mean charge, not energy.
I think there is a misunderstanding about what running locally means.
You can run a GitLab runner on your local machine, but it needs to pull its jobs from the GitLab server. It also requires GitLab to register your runner, so it can’t really work for new contributors to use on their own.
Run your CI in a sandbox.
For example, GitLab allows you to run jobs in a Docker image.
Unless the attacker knows a Docker CVE or is willing to waste a Spectre-style 0-day on you, the most they can do is waste your CPU cycles.
Apart from the obvious lack of portability, compilers write better assembly than most humans.
Maybe to build one of those shitty websites where you can’t select text because every letter is in its own element.
The format seems to be some glue to choose between different compression algorithms for the same file format, plus chunked compression so you can decompress only the parts you need.
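A toy sketch of the chunking part (the chunk size and layout here are made up, not the actual format): each chunk is compressed independently, so one chunk can be decompressed without touching the rest.

```python
import zlib

CHUNK = 64 * 1024
data = bytes(range(256)) * 1024                              # sample payload

# Compress each chunk on its own; a real format would also store an index of offsets.
chunks = [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

# Reading only the third chunk means decompressing only that one entry.
assert zlib.decompress(chunks[2]) == data[2 * CHUNK:3 * CHUNK]
```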
Oops, my attention got trapped by the code and I didn’t properly read the comment.
Now do computation in those threads and realize that they all wait on the GIL, giving you single-core performance on computation and multi-threaded performance only on I/O.
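A rough sketch of what that looks like (timings obviously vary by machine and Python version): the same CPU-bound work run sequentially and in four threads finishes in roughly the same time, not ~4x faster.

```python
import threading, time

def burn(n: int = 2_000_000) -> int:
    total = 0
    for i in range(n):
        total += i * i                     # pure computation, the GIL is held almost the whole time
    return total

start = time.perf_counter()
for _ in range(4):
    burn()
print("sequential:", round(time.perf_counter() - start, 2), "s")

start = time.perf_counter()
threads = [threading.Thread(target=burn) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("4 threads: ", round(time.perf_counter() - start, 2), "s")
```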
It could be useful to reduce their numbers, as long as the female passes the mutation to its children.
I’m wondering if there’s a way [to] combine their computational power.
Only if your problem can be split up reasonably; otherwise you will spend more time waiting for data to move.
Where it can work: video encoding, CI pipelines, data analysis
Where it won’t work: interactive stuff, most single file operations
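As a sketch of what “splits up reasonably” means, here is the same shape of problem on one machine with multiprocessing (a stand-in for separate computers; across real machines you would also pay for shipping the chunks over the network):

```python
from multiprocessing import Pool

def process_chunk(chunk: range) -> int:
    return sum(i * i for i in chunk)       # no chunk needs data from any other chunk

if __name__ == "__main__":
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(4) as pool:
        print(sum(pool.map(process_chunk, chunks)))   # results merge with a cheap sum
```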
I want to get into server hosting […]
Then you don’t need another reason to do it.
[If] I can connect them all to one display [or] make them all accessible in one place.
You can either get a hardware switch or choose a primary computer and connect to the others from it. For that you can use remote desktop software, or be a tryhard and use SSH.
I think both gcc and clang are roughly built around the C memory model.
If you want to interface with hardware, you probably do volatile reads and writes to specific memory addresses.
You should be able to compile for most gcc-supported platforms.
To my understanding, the original meaning of object-oriented is more similar to what we call the actor model today.
In reference to the modern understanding of OO, JS uses prototypal inheritance, which some consider closer to the original vision.