• 5 Posts
  • 103 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • Then you’d be surprised when you calculate the numbers!

    A Falcon 9 delivers 13,100kg to LEO and has 395,700kg of propellant in the 1st stage and 92,670kg in the 2nd. Propellant in both is LOX/RP-1. RP-1 is basically long chains of CH2, so together they burn as:

    3 O2 (3x32) + 2 CH2 (2x14) -> 2 CO2 (2x44) + 2 H2O (2x18)
    

    Which is 2*44/(2*44+2*18) = 71% CO2. Meaning each launch makes (395700+92670)*.71 = 347 tons CO2 or 347/13.1 = 26.5 tons of CO2 per ton to orbit. A lot of it is burned in space, but I’m guessing the exhaust gases don’t reach escape velocity so they all end up in the atmosphere anyway.

    As for how much a compute satellite weighs, there is a wide range of possibilities, since they don’t exist yet. This is China launching a test version of one, but it’s not yet the kind of artifact optimized for compute per watt per kilogram that we’d imagine an orbital supercomputer to be.

    I like to imagine something like a gaming PC strapped to a portable solar panel, a true cubesat :). Browsing online shops, I currently see a fancy gaming PC at 12.7kg drawing 650W, and a 600W solar panel at 12.5kg. Strap them together with duct tape, and it’s 1000/(12.7+12.5)*600 = 24kW of compute power per ton to orbit.

    Something more real-life is the ISS support truss. STS-119 delivered and installed the S6 truss on the ISS. The 14,088kg payload included solar panels, batteries, and the truss superstructure, supplying the last 25% of the station’s power, or 30kW. Say, double that mass to strap server-grade hardware and cooling onto it. That’s 1000*30/(2*14088) = 1.1kW of compute per ton to orbit. A 500kg, 1kW server is overkill, but we are being conservative here.

    In a past post I calculated that fossil fuel electricity on Earth makes 296g CO2 per kilowatt-hour (using a gas turbine at 60% efficiency burning 891kJ/mol methane into 1 mol CO2: 1kJ/s * 3600s / 0.6 eff / (891kJ/mol) * 44g/mol = 296g, as is the case where I live).

    The CO2 payback time for a ton of duct-taped gamer PCs is 1000kg * 26.5kg CO2/kg / (24kW * 0.296kg CO2/kWh) / (24*365) = 0.43 years. The CO2 payback time for the steel truss monstrosity is 1000kg * 26.5kg CO2/kg / (1.1kW * 0.296kg CO2/kWh) / (24*365) = 9.3 years.
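
    For anyone who wants to rerun these numbers, here is the whole chain as a short Python sketch (same input figures as above, nothing new):

    # Rough CO2-payback estimate for orbital compute, using the figures quoted above.
    prop_kg    = 395_700 + 92_670                  # Falcon 9 stage 1 + stage 2 propellant (LOX/RP-1)
    payload_kg = 13_100                            # payload to LEO
    co2_frac   = (2 * 44) / (2 * 44 + 2 * 18)      # CO2 mass fraction of the exhaust, ~0.71

    co2_per_launch_t  = prop_kg * co2_frac / 1000               # ~347 t CO2 per launch
    co2_per_ton_orbit = co2_per_launch_t / (payload_kg / 1000)  # ~26.5 t CO2 per ton to orbit

    grid_kg_per_kwh = 3600 / 0.6 / 891 * 44 / 1000  # 60%-efficient gas turbine: ~0.296 kg CO2/kWh

    def payback_years(kw_per_ton):
        """Years of displaced fossil generation needed to repay the launch CO2 for one ton to orbit."""
        co2_kg = 1000 * co2_per_ton_orbit            # launch CO2 per ton to orbit, in kg
        return co2_kg / (kw_per_ton * grid_kg_per_kwh) / (24 * 365)

    print(payback_years(24))   # duct-taped gaming PC + solar panel: ~0.43 years
    print(payback_years(1.1))  # ISS-truss-style hardware:           ~9.3 years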

    Hey, I was pretty close!





  • PostUp = ip route add 100.64.0.0/10 dev tailscale0
    

    Looks like you need to stick this line in the tailscale service file, since it’s the only time that the existence of the tailscale0 device is guaranteed. If you don’t want to modify the service file inside the package, could you write your own systemd service file and include the tailscale service as a prerequisite?
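
    If you don’t want to touch the packaged file, something like this separate unit might work (untested sketch; the unit name tailscale-route.service is made up, and it assumes the packaged service is called tailscaled.service):

    # /etc/systemd/system/tailscale-route.service  (hypothetical name)
    # One-shot unit that adds the route once tailscaled is up,
    # instead of editing the packaged service file.
    [Unit]
    Description=Route the CGNAT range via tailscale0
    After=tailscaled.service
    Requires=tailscaled.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/ip route add 100.64.0.0/10 dev tailscale0
    ExecStop=/usr/bin/ip route del 100.64.0.0/10 dev tailscale0

    [Install]
    WantedBy=multi-user.target

    Then systemctl daemon-reload and systemctl enable --now tailscale-route.service. If tailscale0 isn’t created immediately when tailscaled starts, you may still need a short delay or a udev trigger.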

    Also make sure that when you start the VPN first and then tailscale, you don’t get a double tunnel situation where tailscale goes out through the VPN (unless that’s what you wanted).





  • IMHO if you don’t have a globally-reachable address or forwarded port, you are not really a participant in the internet, you are just a receptacle xD

    One service I never see mentioned is OVPN. They have 1-to-1 feature parity with mullvad and were an easy drop-in replacement when mullvad closed their ports:

    • wireguard
    • port forwarding
    • no usernames/emails/registration, only account numbers
    • crypto payments/cash in the mail
    • same price as mullvad
    • multiple device keys
    • multihop
    • no bandwidth limits
    • setup guides
    • status dashboard

    I used mullvad for years, sad to see them go, and all my scripts basically worked without any change other than the server addresses/public keys. Only downside is they don’t have as many users so not as many servers. I wish more people would join up so I get more IPs to choose from :D


  • Yeah, I concede that small caps are more likely to be carried away by rainwater than whole bottles :D. What I meant was that for every loose cap on the ground there is a bottle lying around somewhere, and also there are bottles with caps on. No one is tossing their cap into the bushes and then taking the bottle to the recycling center.



  • For an object heavier than the Earth, the 1g radius will be greater than the radius of the Earth. For 56 Earth masses that’s sqrt(56) times bigger = 48,000km.

    A 56 Earth mass black hole will take 5.5e55 years to evaporate according to this calculator. A 100kg black hole (closer to what Richard used to be) is much smaller than the nucleus of an atom and will evaporate in 0.05 nanoseconds.
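
    Both numbers come from standard formulas: g = GM/r^2 for the 1g radius and t = 5120*pi*G^2*M^3/(hbar*c^4) for the Hawking evaporation time. A quick sketch (ignoring greybody corrections, so the lifetime comes out a factor of ~2 longer than the calculator’s):

    import math

    G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI units
    M_earth = 5.972e24                           # kg
    year = 3.156e7                               # s

    def one_g_radius(mass_kg):
        """Distance at which the gravitational acceleration equals 9.81 m/s^2."""
        return math.sqrt(G * mass_kg / 9.81)

    def hawking_lifetime(mass_kg):
        """Evaporation time of a Schwarzschild black hole, no greybody factors."""
        return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

    print(one_g_radius(56 * M_earth) / 1e3)       # ~48,000 km
    print(hawking_lifetime(56 * M_earth) / year)  # ~1e56 years
    print(hawking_lifetime(100) * 1e9)            # ~0.08 ns for a 100 kg black hole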

    Curiously there was a paper recently that calculated that even if there was a small black hole in the center of the Sun, it would take millions of years for it to grow, because the aperture is so small not much can fit through, and the infalling gas heats up so much as to repel the rest, creating an internal hot bubble.


  • I pick up street litter, and having picked up thousands of pounds, I have never felt that loose caps are a problem, let alone one that requires such a solution. The number of littered bottles, with or without a cap, is greater than the number of loose caps, and the amount of plastic in every bottle dwarfs the plastic in a cap. Fixing the cap to the bottle will do nothing to improve the recycling rate of plastic if entire bottles are already tossed anyway.

    I consider the idea of cap tethers as adversarial memetic warfare thrust upon us for some unknown ulterior purpose, possibly to make us hate the very idea of environmental consciousness. Same as paper straws. I like plastic bag bans though.

    As far as picking up litter is concerned, I personally prefer finding bottles without a cap. At least those are empty, all liquid having evaporated after the bottle has spent several months in the bushes. The capped bottles are often half-full and are just nasty. (Who even pays for a bottle of drink and then leaves half of it undrunk anyway?)


  • One example: on May 30, 2020 in Minneapolis, during the protests after the killing of George Floyd, some police were driving around the streets in an unmarked van, shooting at pedestrians at random without stopping. They later claimed they were shooting rubber bullets to “encourage” people to obey the curfew order. Of course, if you are the one being shot at in a drive-by from a mystery van, you have no time to determine what kind of bullets are being fired… One pedestrian shot back! There is a video of it.

    The guy was immediately arrested, miraculously without being shot to death in the process, and put on trial, but acquitted due to justifiable self-defense. The police did not drive around shooting randomly any more after that though. I see the guy even won a $1.5M lawsuit against the police now!



  • I’d love to use ISO sizes, but even if I know that I need a 40-622 wheel, there is no way to search for it on the storefront when every single seller has made gross mistakes in labeling their products! I have to ignore the specs shown entirely and make educated guesses based on the title alone. For example, “WHEEL AL 700 FRONT ALEX AP18 QR Silver UCP” in the picture is almost certainly a 700C wheel and NOT an 18-inch wheel. The “18” in the title probably stands for an 18mm rim width, which means that this wheel will fit my bike and tire, but is a bit narrower than the ideal 23mm. The sellers must be copying the title verbatim from the manufacturer and then haphazardly filling out the specifications without knowing or understanding the actual numbers. The ISO size is not mentioned at all.





  • Some notes for my use. As I understand it, there are 3 layers of “AI” involved:

    The 1st is a “transformer”, a type of neural network invented in 2017, which led to the greatly successful “generative pre-trained transformers” of recent years like GPT-4 and ChatGPT. The one used here is a toy model, with only a single hidden layer (“MLP” = “multilayer perceptron”) of 512 nodes (also referred to as “neurons” or “dimensionality”). The model is trained on the dataset called “Pile”, a collection of 886GB of text from all kinds of sources. The dataset is “tokenized” (pre-processed) into 100 billion tokens by converting words or word fragments into numbers for easier calculation. You can see an example of what the text data looks like here. The transformer learns from this data.
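
    To make the architecture concrete, a one-layer transformer with a 512-neuron MLP looks roughly like this in PyTorch (my own sketch, not the authors’ code; the vocabulary size, embedding width, and head count are placeholders, not the paper’s values):

    import torch
    import torch.nn as nn

    class OneLayerTransformer(nn.Module):
        def __init__(self, vocab=50_000, d_model=128, n_heads=4, d_mlp=512, ctx=256):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)    # token -> vector
            self.pos = nn.Embedding(ctx, d_model)        # position -> vector
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.mlp = nn.Sequential(                    # the 512 "neurons" being studied
                nn.Linear(d_model, d_mlp),
                nn.ReLU(),
                nn.Linear(d_mlp, d_model),
            )
            self.unembed = nn.Linear(d_model, vocab)     # vector -> next-token logits

        def forward(self, tokens):
            x = self.embed(tokens) + self.pos(torch.arange(tokens.shape[1]))
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
            attn_out, _ = self.attn(x, x, x, attn_mask=mask)  # causal self-attention
            x = x + attn_out                                  # residual connection
            x = x + self.mlp(x)                               # residual + MLP
            return self.unembed(x)

    Sampling from the next-token logits in a loop is what turns a trained model like this into a text generator.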

    In the paper, the researchers do cajole the transformer into generating text to help understand its workings. I am not quite sure yet whether every transformer is automatically a generator, like ChatGPT, or whether it needs something extra done to it. I would have enjoyed seeing more sample text that the toy model can generate! It looks surprisingly capable despite only having 512 nodes in the hidden layer. There is probably a way to download the model and execute it locally. Would it have been possible to add the generative model as a javascript toy to supplement the visualizer?

    The main transformer they use is “model A”, and they also trained a twin transformer “model B” using the same text but a different random initialization seed, to see whether they would develop equivalent semantic features (they did).

    The 2nd AI is an “autoencoder”, a different type of neural network which is good at converting data fed to it into a “more efficient representation”, like a lossy compressor/zip archiver, or maybe in this case a “decompressor” would be a more apt term. Encoding is also called “changing the dimensionality” of the data. The researchers trained/tuned the 2nd AI to decompose the AI models of the 1st kind into a number of semantic features in a way which both captures a good chunk of the model’s information content and also keeps the features sensible to humans. The target number of features is tunable anywhere from 512 (1-to-1) to 131072 (1-to-256). The number they found most useful in this case was 4096.
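
    The core of that 512-neurons-to-4096-features step is a sparse autoencoder; a minimal sketch (my own illustration, not their implementation; the sparsity penalty weight is made up):

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_mlp=512, d_features=4096):
            super().__init__()
            self.encoder = nn.Linear(d_mlp, d_features)   # 512 activations -> 4096 features
            self.decoder = nn.Linear(d_features, d_mlp)   # 4096 features -> reconstruction

        def forward(self, acts):
            features = torch.relu(self.encoder(acts))     # overcomplete, mostly-zero code
            recon = self.decoder(features)
            return recon, features

    def sae_loss(recon, acts, features, l1_coeff=1e-3):
        # reconstruction error plus an L1 penalty that pushes the features toward sparsity
        return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()

    It is trained on the MLP activations recorded while the transformer reads text, and each of the 4096 learned directions is one of the “features” shown in the visualizer.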

    The 3rd AI is a “large language model” nicknamed Claude, similar to GPT-4, that they have developed for their own use at the Anthropic company. They’ve told it to annotate and interpret the features found by the 2nd AI. They had one researcher slowly annotate 412 features manually to compare. Claude did as well or better than the human, so they let it finish all the rest on its own. These are the descriptions the visualization shows in the OP link.

    Pretty cool how they use one AI to disassemble another AI and then use a 3rd AI to describe it in human terms!



  • Can’t access the article, but wasn’t China the one most vulnerable to the Malacca Strait being a chokepoint? As in, their trade towards Europe and fuel from the Middle East being potentially threatened? How does Thailand pitching to the US make sense then? How would a Thai bypass even increase security, since both routes are in the same area and can be equally blockaded? There aren’t any problems with throughput capacity at Malacca, unlike say at the Panama Canal. Maybe it will make the travel distance slightly shorter, but is there really any way it could ever be cost-effective to offload and reload ships for a few hundred kilometers of savings?