

Some middle-aged guy on the Internet. Seen a lot of it, occasionally regurgitating it, trying to be amusing and informative.
Lurked Digg until v4. Commented on Reddit (same username) until it went full Musk.
Was on kbin.social (dying/dead) and kbin.run (mysteriously vanished). Now here on fedia.io.
Really hoping he hasn’t brought the jinx with him.
Other Adjectives: Neurodivergent; Nerd; Broken; British; Ally; Leftish
My motherboard has two NVMe slots. I imagine that if I’d had the funds and desire to populate both of them, this same issue could rear its ugly head.
A lot of the original C coders are still alive or only very recently gone (retired, or the ultimate retirement, so to speak), and they carried their cramped coding style with them from those ancient and very cramped systems. Old habits die hard. And then there’s a whole generation who were self-taught or learned from the original coders, and there are a lot of bad habits, twisted thinking and carry-over there too.
(You should see some of my code. On second thought, it’s probably best you don’t.)
For writing loops, many early BASICs had FOR/NEXT, GOTO [line] and GOSUB [line] and literally nothing else due to space constraints. This begat much spaghetti. Better BASICs had (and have) better things like WHILE and WEND, named subroutines (what a concept!) and, egads, no line numbers, which did away with much of that. Unless you were trying to convert a program written for one of the hamstrung dialects, in which case all bets are off.
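Transposed into C++ (the specifics vary from dialect to dialect, so this is only a sketch of the shape of the problem), the difference between GOTO-only looping and structured looping looks roughly like this:

```cpp
#include <iostream>

int main() {
    // The GOTO-only version, roughly what a FOR/NEXT-less, WHILE-less BASIC
    // forces on you once the logic gets any more complicated than this.
    int i = 1;
loop_top:
    std::cout << i << '\n';
    ++i;
    if (i <= 10) goto loop_top;

    // The structured version that WHILE/WEND (or any modern loop) buys you.
    for (int j = 1; j <= 10; ++j) {
        std::cout << j << '\n';
    }
    return 0;
}
```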
Assembly style often reflects the other languages people have learned first, or else it’s written to fit space constraints and then spaghettification can actually help with that. (Imagine how the creators of those BASICs crammed their dialect into an 8 or 16K ROM. And thus, like begetteth like.)
C code style follows similarly. It is barely concealed assembly anyway.
COBOL requires a certain kind of masochist to read and write. That’s not spaghetti, it’s Cthulhu’s tentacles. Run.
You jest, but on some older computers, all ones was the official truth value. Other values may also have been true in certain contexts, but that was the guaranteed one.
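For anyone curious why all ones was a convenient choice: in two’s complement that’s just -1, and it means the bitwise operators double as the logical ones. The machines in question obviously weren’t running C++, but the idea sketches out like this:

```cpp
#include <cstdio>

int main() {
    // In two's complement, -1 is the all-ones bit pattern.
    int t = -1;
    std::printf("true  = %08X\n", (unsigned)t);   // FFFFFFFF

    // With all ones as "true", bitwise NOT doubles as logical NOT,
    // and AND/OR behave the same whether read as logical or bitwise.
    std::printf("false = %08X\n", (unsigned)~t);  // 00000000
    return 0;
}
```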
Depends if you go with the original idea, or the battery idea designed by Hollywood execs who didn’t think the audiences would understand.
… thus proving that Hollywood execs and the people they make their changes for are only good for batteries*, but I digress.
* For legal reasons, this is a joke. I have to say this because some Hollywood execs have more lawyers than braincells**.
** For all the same reasons, this is also a joke.
Oh, I guarantee that pi is 100% normal. Just not necessarily in the base you want it to be normal in.
Turkish has (and maybe related languages have) genderless pronouns, but I don’t know whether that context shifts elsewhere in the sentence structure or not, and how necessary it might be in legal contracts.
It’s a bit vanilla but I like DejaVu Sans Mono 8pt in my terminal, which is where I edit scripts and things.
Curiously, I don’t think that looks quite as good at larger sizes, so I’ve been using Liberation Mono 9pt or 10pt elsewhere.
Both of those have distinct glyphs for the usual easily confused candidates. Can’t be having my lowercase L’s and 1s looking similar.
For a certain set of inputs, yes. Good luck guessing what comprises that set even if 1) there’s documentation and 2) you read it.
Worse, for all we know, `double` actually adds a thing to itself, which might accidentally or deliberately act on strings. Dividing by two has no such magic.
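As a C++ illustration (the `twice` helper is hypothetical, not anything from the code being discussed): anything with an operator+ gets doubled, strings included, while halving has no such trick and simply refuses to compile.

```cpp
#include <iostream>
#include <string>

// A hypothetical "double it" helper: it adds a thing to itself, so it
// quietly works on anything with operator+, strings included.
template <typename T>
T twice(T x) { return x + x; }

int main() {
    std::cout << twice(21) << '\n';                   // 42
    std::cout << twice(std::string("ab")) << '\n';    // "abab" - surprise!

    // Halving has no such magic: this line would not even compile.
    // std::cout << (std::string("ab") / 2) << '\n';
    return 0;
}
```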
If `endl` is a function call and/or macro that magically knows the right line ending for whatever ultimately stores or reads the output stream, then, ugly though it is, `endl` is the right thing to use.
If a language or compiler automatically “do(es) the right thing” with `\n` as well, then check your local style guide. Is this your code? Do what you will. Is this for your company? Better to check what’s acceptable.
If you want to guarantee a Unix line ending, use `\012` instead. Or `\cJ` if your language is sufficiently warped.
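Assuming the `endl` in question is C++’s std::endl, a rough sketch of the trade-offs (note that std::endl also flushes the stream, which is part of why people argue about it):

```cpp
#include <fstream>
#include <iostream>

int main() {
    // std::endl writes '\n' and then flushes; on a text-mode stream the
    // '\n' is translated to the platform's native line ending.
    std::cout << "with endl" << std::endl;

    // A bare '\n' gets the same translation without the flush, which is
    // usually what you want inside a tight loop.
    std::cout << "with backslash-n" << '\n';

    // "\012" is just the octal escape for byte 10 (LF), so on a stream
    // opened in binary mode it guarantees a literal Unix line ending.
    std::ofstream out("out.bin", std::ios::binary);
    out << "unix ending\012";
    return 0;
}
```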
I see I’ve forgotten to put on my head net today. You know the one. Looks like a volleyball net. C shape. Attaches at the back. Catches things that go woosh.
Those “almost completely forgotten” characters were important when ASCII was invented, and a lot of that data is still around in some form or another. And since they’re there, they remain available for the use they were designed for. You can be sure that someone would want to re-invent them if they weren’t already there.
Some operating systems did assign symbols to those characters anyway, MS-DOS being notable for this. Other standards had code pages where the byte range beyond ASCII meant different things for different languages: one code page might have “é” at a given position while another put an entirely different character there. This caused problems.
Unicode is effectively a superset of ASCII that covers all bases and gives every necessary symbol a fixed code point.
That some languages don’t get their alphabets in contiguous runs, because they’re built out of extended Latin, is a problem, but not one that much can be done about.
It’s understandable that anyone would want their own alphabet to be the base, but one of them has to be, or you end up in code page hell again. English happened to get there first.
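A tiny C++ sketch of what those fixed places buy you; the byte values here are just the standard ASCII/Latin-1/UTF-8 ones, nothing project-specific:

```cpp
#include <cstdio>

int main() {
    // The first 128 Unicode code points are exactly ASCII, and UTF-8
    // encodes them as the same single bytes, so 'A' is 0x41 everywhere.
    std::printf("'A' -> 0x%02X\n", (unsigned)'A');

    // Under the old code pages, byte 0xE9 might be 'e-acute' (Latin-1) or
    // something else entirely in another language's code page. Unicode pins
    // it to the single fixed code point U+00E9 for everyone.
    const unsigned char utf8_eacute[] = { 0xC3, 0xA9 }; // UTF-8 for U+00E9
    std::printf("U+00E9 as UTF-8 -> 0x%02X 0x%02X\n",
                (unsigned)utf8_eacute[0], (unsigned)utf8_eacute[1]);
    return 0;
}
```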
If you want a fun exercise (for various interpretations of “fun”), design your own standard. Do you put the digits 0-9 as code points 0-9 or do you start with your preferred alphabet there? What about upper and lower case? Which goes first? Where do you put Chinese?
It’s a “joke” because it comes from an era when memory was at a premium and, for better or worse, the English-speaking world was at the forefront of technology.
The fact that English has an alphabet of length just shy of a power of two probably helped spur on technological advancement that would have otherwise quickly been bogged down in trying to represent all the necessary glyphs and squeeze them into available RAM.
… Or ROM for that matter. In the ROM, you’d need bit patterns or vector lists that describe each and every character and that’s necessarily an order of magnitude bigger than what’s needed to store a value per glyph. ROM is an order of magnitude cheaper, but those two orders of magnitude basically cancel out and you have a ROM that costs as much to make as the RAM.
And when you look at ASCII’s contemporary EBCDIC, you’ll realise what a marvel ASCII is by comparison. Things could have been much, much worse.
Flag Admiral Stabby earned that knife.
Somewhat ironically, it was about 10 years ago that I had to quit, and that was because of my mental health.
In my case, I’m a vanilla cis-het male, but if you go out along that other axis, the one that’s neurodivergence, well, that’s where years of trying to get by in a world heavily geared to neurotypicals finally took its toll and my brain just couldn’t take it any more.
This must be the new landscape. Before I had to quit, the male-dominated IT landscape I worked in had no apparent cross-dressers. Or furries for that matter. Admittedly, the companies were relatively small so maybe they didn’t hit the threshold for there necessarily being someone who didn’t present as cis male.
A handful of gay dudes, sure, but pretty sure none of them dressed this way. Even if one of them hit some level of stereotype and did drag in their spare time - which I have no evidence of - that’s not the same as whatever this is.
Technically, if you ignore the inherent contradiction in the name, some languages treat `NaN` as a falsy number, and the IEEE standards admit trillions of possible `NaN`s.
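A quick C++ sketch of the trillions-of-NaNs part; exactly which payload std::nan() produces is implementation-defined, so the bit patterns may differ on your machine, but any all-ones exponent with a non-zero mantissa is a NaN:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    // Two NaNs with (potentially) different payloads. For a 64-bit double,
    // roughly 2^53 distinct bit patterns all count as NaN.
    double a = std::nan("1");
    double b = std::nan("2");

    std::uint64_t bits_a, bits_b;
    std::memcpy(&bits_a, &a, sizeof a);
    std::memcpy(&bits_b, &b, sizeof b);
    std::printf("a = %016llx, b = %016llx\n",
                (unsigned long long)bits_a, (unsigned long long)bits_b);

    // Both are NaN, and a NaN compares unequal to everything, itself included.
    std::printf("isnan: %d %d, a == a: %d\n",
                std::isnan(a), std::isnan(b), a == a);
    return 0;
}
```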
Ada is a language that leaves a lot of things “implementation dependent” as it’s not supposed to grant easy access to underlying data types like those you’ll find in C, or literally on the silicon. You’re supposed to be able to declare your own integer type of any size and the compiler is supposed to figure it out. If it chooses to use a native data type, then so be it.
This doesn’t guarantee the correctness of the compiler, though, nor of the programmer who absolutely has to work with native types because it’s an embedded system.
This has ended in disaster at least once: https://itsfoss.com/a-floating-point-error-that-caused-a-damage-worth-half-a-billion/
In this instance, I think there was some suggestion to write code in mostly lower case, including all user variables, or at least inCamelCaseLikeThis with a leading lower case letter, and so to make True and False stand out, they’ve got to be capitalised.
I mean. They could have been TRUE and FALSE. Would that have been preferable? Or how about a slightly more Pythonic style: __true__ and __false__
Reminds me of the company where one of the top brass tried to unmount an important fileshare with rm. That was the day they found out that they didn’t have recent backups of a, shall we say, disquieting amount of important information and people’s work.
Staff started taking their own private backups of important things after that.