• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • That’s always the hard part of these “government fraud” narratives. It’s the insidious shit: the ineptitude, the incompetence. It’s not something where you can walk into the FDA and find a filing cabinet labeled “deliberate and known waste contracts”.

    I work in aerospace and the worst engineers I’ve had the displeasure of working with were on cost+ contracts (the money keeps rolling in until the job is “done”).

    The only real way to track down abuses like that is to stick an oversight committee on each and every contract and watch them like a hawk. But who watches the watchers? You run that risk at every stage; eventually you either have to trust or gamble


  • You don’t. In C everything gets referenced by a symbol during the link stage of compilation. Libraries ultimately get treated like your own source code, and all items land in a symbol table. Two items with the same name result in a link failure and the build aborts. So a library that defines main plus a program with its own main is no bueno.

    When Linux loads an executable, it basically reads the entry point out of the executable’s header (which the C runtime sets up to eventually call “main”) and starts executing at that point

    Windows behaves mostly the same way, as does macOS. Most RTOSes have their own special way of doing things, and on bare metal you’re at the mercy of your CPU vendor. The C standard specifies “main” as the entry point of a hosted program, so it’s the special symbol we all just happen to use
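
    A minimal sketch of that duplicate-symbol failure (file names are hypothetical):

    ```c
    /* lib.c -- pretend this comes from a library you link against */
    int main(void) { return 1; }
    ```

    ```c
    /* prog.c -- your actual program, with its own main */
    #include <stdio.h>

    int main(void) { puts("hello"); return 0; }
    ```

    Building with `cc prog.c lib.c` dies at the link stage with something like “multiple definition of ‘main’”.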


  • I’d argue the two aren’t as different as you make them out to be. Both types of projects want a functional codebase, both have limited developer resources (communities need volunteers, businesses have budget limits), and both can benefit greatly from the development process being sped up. Many development practices that are industry standard today started in the open source world (style guides and version control strategy, to name two heavy hitters), and there’s been some bleed-through from the other direction as well (tool juggernauts like Atlassian now have open source alternatives made directly in response)

    No project is immune to bad code. There’s even a lot of bad code out there that was believed to be good at the time and mostly worked; only in retrospect did we learn how bad it was, and no one wanted to fix it.

    The end goals and purposes are for sure different between community passion projects and corporate, financially driven projects. But the way you get there is more or less the same, and that’s the crux of the article’s argument: historically, open source and closed source have done the same thing, so why is the usage of this one tool so wildly different?



  • In a roundabout way, it probably is geographically related. Few people live there because the land is pretty useless (though not so useless that no one lives there), so the people are spread out; and when county lines were drawn, they followed county sizes similar to Midwestern states’, so each county holds very few people. States further west drew larger counties with similar average population densities, so their per-county populations are high enough that there are enough suicides that someone may actually be tracking them on an annual basis

    Arbitrarily cherry-picking that squished pentagon county in northern Nebraska, there are only 769 people living in the entire county. If just one of them committed suicide, that county would be off the charts lethal at roughly 130 per 100,000. And at the US average suicide rate, it could take around ten years for anyone in the county to commit suicide. So there probably isn’t anyone keeping real statistics in that county
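
    The back-of-envelope math (assuming a US average of roughly 14 suicides per 100,000 people per year, which is in the right ballpark):

    $$\frac{1}{769} \times 100{,}000 \approx 130 \text{ per } 100{,}000$$

    $$769 \times \frac{14}{100{,}000} \approx 0.11 \text{ suicides/year} \;\Rightarrow\; \text{one expected every} \sim 9 \text{ years}$$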

    Realistically I think this is a bad map since counties with lower populations get disproportionately amplified suicide rates



  • Heh, I guess this shows my corporate software dev experience. Whenever I’ve taught git workflows, it was always paired with a work ticketing system where the changes you were making were ideally one single set of changes. If you needed a feature or bug fix someone else was working on, it was done on another branch, which you could pull into your code early; for tracking purposes we always made sure the other person merged into main first. The only time I’ve seen per-line manipulation with git was when someone made a ton of changes in a file and wanted to revert a handful of lines.
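
    (For reference, that per-line revert is git’s patch mode; a quick sketch, with a hypothetical file name:)

    ```
    # Interactively pick which hunks of foo.c to restore from HEAD;
    # 's' splits a hunk, 'e' edits it down to individual lines
    git checkout -p HEAD -- foo.c
    ```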

    Everything else you mentioned I’ve had a web git host like GitLab or Bitbucket for, but I kinda put that more into the peer review workflow than git itself


  • That is the one use case I’ve seen where a GUI is absolutely faster.

    In my line of work, I primarily work on embedded systems or process automation, so any new files in the repo directory either need to be added for tracking or added to the ignore file. I’m not saying it will never happen, but at least in my experience it happens so rarely that I always try to teach the command line when possible
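
    That routine is just a couple of commands (a sketch; the file and directory names are hypothetical):

    ```
    git add src/new_module.c     # start tracking a new source file
    echo "build/" >> .gitignore  # or ignore generated artifacts instead
    git add .gitignore
    ```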



  • Every time I mentor a dev on using git, they insist so much on using some GUI. Even ones who are “proficient” take way longer to do any action than I can with the CLI. I had one dev who came from SVN land try to convince me that TortoiseGit was the only way to go

    I died a little that day, and I never won her over to the command line despite her coming to me kinda regularly to un-fuck her repository (still one of the best engineers I ever worked with, and I honestly miss her… Just not her source control antics)





  • To me 16 is long haha.

    I usually end up running with 16 characters, since a lot of services reject passwords longer than 20 and, as a programmer, I just like it when things are a power of two. Back in the Dark Times of remembering passwords, my longest was 13 characters, so when I started using a password manager, setting them that long felt wild to me.

    I do have my bank accounts under a 64-character password purely because monkey brain like seeing big security rating in KeePass. Entropy go brrrrrrrrrrrr
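
    (The entropy math, assuming each character is drawn uniformly at random from the ~95 printable ASCII characters:)

    $$H = L \cdot \log_2 95 \approx 6.6\,L \text{ bits} \quad\Rightarrow\quad H_{16} \approx 105 \text{ bits}, \quad H_{64} \approx 420 \text{ bits}$$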


  • I’ve used cloud-based password managers for work and “self-host” my personal stuff. I barely consider it self-hosting, since I use KeePass and every machine is configured to keep a local cached copy of the database but primarily pull from the database file on my in-home NAS.

    Two issues I’ve had:

    Logging into an account on a device that isn’t currently on my home network is brutal. I often resort to simply viewing the needed password and painstakingly typing it in (and I run with loooooong passwords)

    If I add or change a password on a desktop and don’t sync my phone before I leave, I get locked out of accounts. In two years rocking this setup it’s happened three times: twice I just said meh, I don’t really need to do this now, and the third time I went through account recovery and set a new password from my phone.

    Minor complaint:

    Sometimes Keepass2Android gets stuck trying to open the remote database and I have to let it sit and time out (5 minutes!!!), which gets really annoying but happens so infrequently that I call it just a minor complaint

    All in all, I find the inconvenience of the personal setup so low that, to me, even a $10 annual subscription is not worth it


  • Combination of anti-big-company sentiment + people feeling entitled to get things for free, if I had to guess. It also usually feels wrong when a corporation threatens a lawsuit against a single person, since the US court system heavily favors the party with more money, and it’s probably a true statement to say that Nintendo has more resources than the lead dev.

    Modern Vintage Gamer on YouTube had an interesting take: by stifling emulator development now, Nintendo will hurt the industry in the long run, because Switch exclusives will become increasingly difficult to play once support ends (an argument I myself don’t find all that compelling)

    Nerrel on YouTube has a well-put-together and researched video on emulation: at least in the US, it’s been tested in court several times that emulators themselves are legal, but obtaining the code for them to run is almost always not, since you usually have to make a copy, and that violates the publisher’s exclusive right to copy



  • Ada

    It has a lot of really nice features for creating data types, and amazing static analysis at compile time.

    But all the tooling around it is absolute crap, which makes using the language unbearable and truly awful. If it had better tooling, I could see it having taken a decent chunk of development away from C and C++
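
    (A tiny, hypothetical sketch of the kind of data-type feature meant here: Ada lets you declare range-constrained numeric types that the compiler checks.)

    ```ada
    procedure Demo is
       type Percent is range 0 .. 100;  -- only 0 .. 100 are legal values
       P : Percent := 150;  -- flagged at compile time; raises Constraint_Error if run
    begin
       null;
    end Demo;
    ```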



  • So many people forget that just because they understand how to use a Linux terminal and how Linux works at a high level, not everyone does. Plus, learning all of that takes time, effort, and tenacity, which not everyone is willing to put in. Linus’s whole conclusion was that as long as that learning curve exists and it’s that easy to shoot yourself in the foot, the Linux desktop just isn’t viable for a lot of people.

    But Linus has had a lot of public fuck-ups, therefore everything he says must be inherently wrong.