


  • You misinterpreted my, to be fair, vague statement. I meant AA seems like a bad source on opposition parties like the PKK, given the obvious conflict of interest.

    I mean, AP is a pretty decent source. It’s a nonprofit co-op stretching back to 1846 in a country with, err, a could-be-worse press-freedom history, while AA has been explicitly state-run since 1920, somewhat akin to VOA, BBC, Al Jazeera, or RT, I guess.

    And yes, I know, AP is still an objectively bad source for specific topics, you don’t have to drill that in. So would whoever shills for the PKK, in some respects. But I’m not playing the game of “they did this and this, they can’t be trusted like them and them!” either. One has to look for conflicts of interest everywhere, but it’s also okay to respect the good work long-running institutions have done (like AA and this article).




  • Interesting source. It’s basically a nationalized Turkish outlet:

    https://en.m.wikipedia.org/wiki/Anadolu_Agency

    After the Justice and Development Party (AKP) took power, AA and the Turkish Radio and Television Corporation (TRT) were both restructured to more closely reflect the administration line. According to a 2016 academic article, “these public news producers, especially during the most recent term of the AKP government, have been controlled by officials from a small network close to the party leadership.”

    Still, the writing is flat in a good way? I have found that reporting from politically captured sources (say, RT) can be conspicuously good, if it’s on an international subject that aligns with their incentives. For instance, Turkey’s AKP is no fan of Netanyahu, hence AA is motivated to produce (seemingly) original reporting like this.



  • At risk of getting more technical: some near-future combination of bitnet-like ternary models (toy sketch at the end of this comment), less-autoregressive architectures, exploiting sparsity, and models not being so stupidly general-purpose will bring inference costs down dramatically. Like, a watt or two on your phone dramatically. AI energy cost is a meme perpetuated by Altman so people will give him money, kinda like an NFT scheme.

    …In other words, it’s really not that big a deal. Like, a drop in the bucket compared to global metal production or something.

    The cost of training a model in the first place is more complex (and really wasteful at some ‘money is no object’ outfits like OpenAI or X), but it’s also potentially very cheap: DeepSeek and Flux, for example, were trained with comparatively little electricity. So was Cerebras’s example model.
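
    To give a feel for the ternary idea, here’s a toy NumPy sketch (nothing to do with any real bitnet implementation, just the core trick): every weight is -1, 0, or +1, so the matmul needs no multiplications at all, and the zeros can be skipped outright.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8).astype(np.float32)         # activations
    W = rng.integers(-1, 2, size=(4, 8)).astype(np.int8)  # ternary weights in {-1, 0, +1}

    def ternary_matvec(W, x):
        # No multiplies: add where the weight is +1, subtract where it's -1,
        # and skip the zeros entirely (that's the sparsity win).
        out = np.zeros(W.shape[0], dtype=np.float32)
        for i, row in enumerate(W):
            out[i] = x[row == 1].sum() - x[row == -1].sum()
        return out

    # Sanity check: matches an ordinary float matmul on the same weights.
    assert np.allclose(ternary_matvec(W, x), W.astype(np.float32) @ x)
    ```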



  • It’s politicized.

    It even works in hindsight. I pointed out that a cherished fan remaster of a TV show, made years ago, was processed with machine learning, which apparently everyone had forgotten. I got banned from the fandom subreddit under its no-AI rule.

    The ironic thing is that this works in corpo AI slop’s favor, as anti-AI sentiment hurts locally runnable, open-weight models and earnest efforts more than anything.


  • Let’s look at a “worst case” on my PC: 3 attempts, 1 main step, and 3 controlnet/postprocessing steps, so 64-ish seconds of generation at 300 W above idle.

    …That’s about 5 watt-hours. You know, basically the same as using Photoshop for a bit, or gaming on a laptop for 2 minutes.

    Datacenters are much more efficient because they batch the heck out of jobs: 60 seconds on a 700 W H100 or MI300X is serving many, many generations in parallel (quick arithmetic below).
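
    The arithmetic, as a quick sanity check (the batch size here is a made-up illustrative number, not a measured one):

    ```python
    # Local "worst case": ~64 s of generation drawing ~300 W above idle.
    local_wh = 64 * 300 / 3600                      # seconds x watts -> watt-hours
    print(f"local: {local_wh:.1f} Wh per image")    # ~5.3 Wh

    # Datacenter: 60 s on a ~700 W accelerator, amortized over a whole batch.
    batch_size = 32                                 # hypothetical; real batches vary widely
    dc_wh = 60 * 700 / 3600 / batch_size
    print(f"datacenter: {dc_wh:.2f} Wh per image")  # ~0.36 Wh
    ```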

    Not trying to be critical or anything (I hate enshittified corpo AI), but that’s more or less what generation looks like.





  • Musk has quite a “tech bro” following (which we don’t see because we don’t live and breathe on Twitter and such), and that group wields enormous psychological influence over the population.

    Seems unlikely, but if Musk aligns himself more closely with Peter Thiel, Zuckerberg, critical software devs, and the like, that’s an extremely dangerous platform for Trump. They can sap power from MAGA (and anyone else) with the flip of a switch.

    There’s quite a fundamental incompatibility between tech oligarchs and the red meat MAGA base, too, as is already being exposed. It’s going to get much more stark over the next few years.



  • Funny thing is, correct JSON is easy to “force” with grammar-based sampling (i.e., the model literally can’t output invalid JSON) plus completion prompting (i.e., start the answer yourself and let the model fill in what’s left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that…
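
    For the curious, a minimal sketch of both tricks, assuming llama-cpp-python (which exposes llama.cpp’s GBNF grammar support); the model path and the tiny grammar are just illustrative:

    ```python
    from llama_cpp import Llama, LlamaGrammar

    # GBNF grammar for a tiny JSON object: during sampling, any token that
    # would violate this shape is masked out, so invalid JSON is impossible.
    grammar = LlamaGrammar.from_string(r'''
    root   ::= "{" ws "\"name\"" ws ":" ws string ws "}"
    string ::= "\"" [a-zA-Z ]* "\""
    ws     ::= [ \t\n]*
    ''')

    llm = Llama(model_path="model.gguf")  # hypothetical local model file

    # This is also completion-style prompting: the model gets a raw prefix
    # and just fills in what's left, with no chat template in the way.
    out = llm(
        "Extract the person as JSON.\nText: Alice waved.\nJSON: ",
        grammar=grammar,
        max_tokens=64,
    )
    print(out["choices"][0]["text"])  # e.g. {"name": "Alice"}
    ```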

    A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, and, gasp, prompt-cached open-weight ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”



  • This is the mark of oppressive regimes, right? The details aren’t what matters (or, really, how they were drawn), only the plausible appearance of conclusions that fit the party line. Criticism? Just ignorable dissent that no one important will hear.

    We’re going to see a lot more of this, as chatbots are, unfortunately, rather good at producing shallowly plausible walls of text. It’s easy for a lazy, incompetent person to do.

    AI can be used to fact-check papers too, for example by programmatically following citations to see whether they’re real, uncontroversial, or at least somewhat sensible (a minimal sketch below), but that’s more technical to implement, and even if it weren’t, it doesn’t really matter. This admin can simply shrug off any long-winded criticism as partisan and move on to the next controversy, knowing full well attention spans are too short to care. In other words, the information environment is the fundamental issue here.
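
    For the non-AI half of that, here’s a minimal sketch of the “are the citations even real” check against the public Crossref API (the DOI list is a placeholder; judging whether a real citation is sensible is the part you’d need a model or a human for):

    ```python
    import requests

    # Hypothetical DOIs scraped from a paper's reference list.
    dois = ["10.1038/nature14539", "10.9999/definitely-not-real"]

    for doi in dois:
        # Crossref returns metadata for registered DOIs and 404 for unknown ones.
        r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if r.ok:
            title = r.json()["message"].get("title", ["<untitled>"])[0]
            print(f"{doi}: found -> {title}")
        else:
            print(f"{doi}: NOT FOUND (status {r.status_code})")
    ```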