• 0 Posts
  • 305 Comments
Joined 2 years ago
Cake day: July 9th, 2023





  • I’ve actually noticed this exact thing with elevators before… I was kind of amazed the beep and light were hooked up completely independently of the actual floor-selection logic.
    It makes sense that the light in the button would be wired directly to the button contacts, while the controller polls the buttons separately, so a short press can fall between polls and be missed… These buttons shouldn’t need a debounce period, since pressing one a second time doesn’t do anything. If the inputs were interrupt-driven, this probably wouldn’t happen.
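A rough sketch of why polling can miss a press while an edge-triggered (interrupt-style) handler would not. All the timings here are invented for illustration, not taken from any real elevator controller:

```python
# Toy simulation: a quick 30 ms button tap, sampled by a 50 ms polling loop
# vs. an edge-triggered handler that fires on every rising edge.

def button_state(t_ms: int) -> bool:
    """Button is held from t=60ms to t=90ms (a quick tap)."""
    return 60 <= t_ms < 90

def poll_for_press(poll_interval_ms: int, duration_ms: int) -> bool:
    """Sample the button every poll_interval_ms; True if a press is ever seen."""
    for t in range(0, duration_ms, poll_interval_ms):
        if button_state(t):
            return True
    return False

def edge_triggered_press(duration_ms: int) -> bool:
    """Fire on any rising edge (low -> high), like a hardware interrupt."""
    prev = False
    for t in range(duration_ms):  # 1 ms resolution stands in for "continuous"
        cur = button_state(t)
        if cur and not prev:
            return True
        prev = cur
    return False

print(poll_for_press(50, 200))    # False: samples at 0/50/100/150 ms all miss the tap
print(edge_triggered_press(200))  # True: the rising edge at t=60 ms is caught
```

With a faster poll (say every 10 ms) the tap would be caught too, which is why this failure mode only shows up with quick presses and a slow poll loop.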



  • xthexder@l.sw0.com to Science Memes@mander.xyz · Black Mirror AI
    1 point · 23 days ago

    Anything that’s per-commit is part of the “build” in my opinion.

    But if you’re running a language server and have stuff like format-on-save enabled, it’s going to use a lot more power as you’re coding.

    But like you said, text editing is a small part of the workflow, and looking up docs and browsing code should barely require any CPU; a phone can do it on a fraction of a watt, and a PC should be downclocking when the CPU is underused.


  • xthexder@l.sw0.com to Science Memes@mander.xyz · Black Mirror AI
    5 points · edited · 23 days ago

    It sounds like it does save you a lot of time, then. I haven’t had the same experience, but I did all my learning to program before LLMs existed.

    Personally I think the amount of power saved here is negligible, but it would actually be an interesting study to see just how much it is. It may or may not offset the power usage of the LLM, depending on how many questions you end up asking and such.


  • xthexder@l.sw0.com to Science Memes@mander.xyz · Black Mirror AI
    4 points · edited · 23 days ago

    I didn’t even say which direction it was misleading; it’s just not a valid comparison to set a single invocation of an LLM against an unrelated continuous task.

    You’re comparing a volume of water with a flow rate. Or, if this were power, you’d be comparing energy (joules or kWh) with power (watts).

    Maybe comparing asking ChatGPT a question to doing a Google search (before their AI results) would actually make sense. I’d also dispute those “downloading a file” and other bandwidth-related numbers. Network transfers are insanely optimized at this point.
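The mismatch above is just dimensional analysis: to compare a one-off query against a continuous task, the continuous task’s power (or flow rate) has to be multiplied by a duration first. The numbers below are made-up placeholders, not real measurements:

```python
# Dimensional-analysis sketch: energy (Wh) and power (W) aren't directly
# comparable; the continuous task needs a duration to become an energy.
# All figures are invented placeholders, not measurements.

llm_query_energy_wh = 3.0   # hypothetical: one LLM query, in watt-hours
streaming_power_w = 15.0    # hypothetical: a video stream's draw, in watts

# Wrong: 3.0 vs 15.0 compares incompatible units (Wh vs W).

# Right: pick a duration, convert power to energy, then compare.
def energy_wh(power_w: float, hours: float) -> float:
    return power_w * hours

one_hour_stream_wh = energy_wh(streaming_power_w, 1.0)        # 15.0 Wh
queries_per_stream_hour = one_hour_stream_wh / llm_query_energy_wh
print(queries_per_stream_hour)  # 5.0 queries per stream-hour, given these fake numbers
```

The point isn’t the ratio itself but that the ratio only exists once both sides are in the same units.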






  • Thanks for responding. I’m not really a web dev, so I haven’t thought about it much.

    The tab layout and <div> examples were definitely not things I was thinking about. I guess that’s a good incentive to use tags like <section> and <article> instead of divs with CSS classes.

    I’m actually a bit color blind myself, so I appreciate sites being high contrast and not relying on color alone for indicators. A surprising number of sites completely break when you try to zoom in and make the text bigger, too, which is often due to bad floating layouts, especially if sizing is done with JS…




  • My opinion is that including trans people in this sort of study actually reduces the bias, because they’re the only people who will have experienced the social impacts of presenting both male and female at different times. All cisgender people will be inherently biased towards their own limited experience.