

Fool me once, shame on you. Fool me twice, shame on me.
In the early 2000s, computers were ugly grey boxes with noisy fans and a hard drive that gave the impression a cockroach colony was trying to escape your case. I wanted to build a silent computer to watch DivX movies from my bed, but as a broke teen, I only had access to discarded hardware I could find here and there.
I dismantled a power supply, stuck the MOSFETs onto big motherfucking heatsinks, and I had a silent power supply. I put another huge industrial heatsink on the CPU (I think it was an AMD K6 500MHz) and had fanless cooling. That left the hard drive.
Live CDs/USBs weren’t common at that time. I discovered a live CD distro (I think it was Knoppix) that could run entirely from RAM.
I removed the hard drive, booted on the live distro, then replaced the CD with my DivX, and voilà.
Having a fanless, hard-drive-less computer was pure science fiction for me and my friends at the time.
Meshroom is FOSS, relatively easy to use, and works out of the box. It is industrial quality and is widely used in the cinema and entertainment industry. So I think it’s the go-to if you want something robust and usable.
NVIDIA provides several research tools to do Radiance Field stuff on their GitHub: https://github.com/NVlabs. They give impressive results, but none of them is user friendly. It’s research stuff.
I gave ollama.nvim a try, but I’m not convinced (not by the plugin, but by using an LLM directly in the IDE). For security reasons, I cannot send code to public LLMs, so I either have to use my company’s internal LLM (GPT-4o), which only has a front end, no API, or use a local LLM through ollama, for example.
I tried several models, but they are too slow or too dumb. In the end, when I need help, I copy/paste code into the LLM portal front end.
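For reference, querying a local ollama instance is just an HTTP call against its REST API. A minimal sketch in C++ with libcurl, assuming ollama runs on its default port 11434 ("codellama" is just an example model name, not a recommendation):

// Send one prompt to a local ollama server and print the raw JSON reply.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append response chunks to a std::string.
static size_t onData(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    // "stream":false asks ollama for a single JSON answer instead of chunks.
    const std::string body =
        R"({"model":"codellama","prompt":"Explain std::vector<bool>","stream":false})";
    std::string response;

    struct curl_slist* headers =
        curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:11434/api/generate");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, onData);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(curl);

    std::cout << response << std::endl;  // raw JSON answer from the model

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}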
The point of self-hosting is not to get people to visit your server. The point of self-hosting is to have control over your infrastructure. It’s the difference between renting and buying a home.
When you buy a home, you don’t complain that no one wants to sleep in your home 😆
It’s a bit strange. On one side you talk about a project you work on, so I’d expect a repo on GitHub or something; on the other side, the link you posted redirects to a product or service you seem to be selling.
It’s cool if you can make money with it, but to be more effective you might want to clarify your point.
I don’t even understand what that guy is trying to sell. Is it some kind of picture of a monkey?
auto v = std::vector<bool>(8);
bool* vPtr = v.data(); // looks innocent, works for any other T…
vPtr[2] = true;        // …but std::vector<bool> is a packed bitset, not an array of bools
// KABOOM !!!
I’ve spent days tracking this bug… That’s how I learned about the bool specialisation of std::vector.
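If anyone hits the same trap: a minimal workaround, assuming you actually need a real pointer to contiguous flags, is to sidestep the specialisation with a byte type:

#include <cstdint>
#include <vector>

int main()
{
    // std::vector<bool> packs 8 flags per byte, so it cannot hand out a bool*.
    // Storing the flags as uint8_t keeps plain contiguous storage instead:
    std::vector<std::uint8_t> v(8, 0);
    std::uint8_t* vPtr = v.data();  // a real pointer into a real array
    vPtr[2] = 1;
    return 0;
}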
Not all heroes wear capes.
Poor internet connection or no internet at all, network latency too high for their needs, a specific fine-tuned LLM?
Of course, the main reason is privacy. My company hosts its own GPT-4 chatbot and forbids us from using public ones. But I suppose there are other legit use cases for hosting your own LLM.
Maybe worth mentioning that Bitwarden also offers bitwarden.eu to host your data in Europe. I used bitwarden.com for years and switched to bitwarden.eu a few months ago, because of reasons, you know…
Also, the water you are drinking has probably been peed out by a dinosaur. Several times. But probably not by a human.
A small word about OpenGL, as it seems confusing for many people:
OpenGL is a spec written by the Khronos Group. There is no such thing as an OpenGL library, or OpenGL source code. You cannot “download” OpenGL. OpenGL is really just a bunch of .txt files explaining how functions should be named and what they should do. This spec defines an API.
Then, this API can be implemented by anyone, by writing code and putting it in a library.
GPU drivers implement this API. That means that Nvidia, AMD and Intel have their own implementation.
To get access to this API from your program, you have to “GetProcAddress” every function you want to use from the GPU driver’s DLL. As this is quite painful, libs exist, like GLEW, that do it for you. These libs are really just a long list of GetProcAddress calls for all entry points.
That’s also why you cannot “static link” with OpenGL: the functions can only be retrieved at runtime.
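A minimal sketch of what GLEW does under the hood, Windows flavour (this assumes an OpenGL context is already current; glGenBuffers is just one of the hundreds of entry points a loader resolves):

// Manually loading one OpenGL entry point from the driver at runtime.
#include <windows.h>
#include <GL/gl.h>

// The function's signature, as described by the spec:
typedef void (APIENTRY *PFNGLGENBUFFERSPROC)(GLsizei n, GLuint* buffers);

PFNGLGENBUFFERSPROC glGenBuffers = nullptr;

bool loadGenBuffers()
{
    // Ask the driver for the address of its implementation:
    glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    return glGenBuffers != nullptr;  // nullptr if the driver doesn't expose it
}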
Another interesting thing is Mesa. It’s an open-source implementation of the OpenGL spec, including a pure-CPU software rasterizer. So Mesa is a lib, with source code and so on. You can download it, link against it, etc. This is very useful when you want to do 3D without a GPU (yes, this happens, especially in the medical domain).
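For example, rendering off-screen on the CPU through Mesa’s OSMesa interface looks roughly like this (a sketch; it assumes Mesa was built with OSMesa support):

// Off-screen, CPU-only OpenGL rendering through Mesa's OSMesa interface.
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>

int main()
{
    const int width = 640, height = 480;
    std::vector<unsigned char> buffer(width * height * 4);  // RGBA framebuffer in RAM

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
    OSMesaMakeCurrent(ctx, buffer.data(), GL_UNSIGNED_BYTE, width, height);

    // From here, regular OpenGL calls rasterize on the CPU into 'buffer':
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    OSMesaDestroyContext(ctx);
    return 0;
}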
Brave.
Because I installed it when it was a pre-alpha version. I ended up with an ugly window with just an address bar. I thought “this shit will never work, yet another utopian project, too bad…”
Then I came back 2 years later, gave it a second chance, and “OMG! They fucking did it!”. So I keep it as redemption for not having believed in the project at first.
You don’t seem to be the kind of person one can have a constructive argument with. I gave you facts and numbers. Sorry, I cannot take my time machine, go 200 years back, and tell Great Britain to stop burning coal.
Also, my company’s objective is to become carbon neutral by 2030 and 20% carbon negative by 2050. Locally, we have decreased our electricity consumption by 20% since 2022 and put in place mobility actions to push people towards taking the bike or the bus. Nearly half of the employees use soft transport (public transit, bikes, onewheels, etc.).
We cannot rewrite the past or snap our fingers to change the habits of 8 billion people.
We will be judged on our current actions and future results. As of today, we are trying something that we hope is going in the right direction. But it’s always easier to criticize than to do anything.
Global CO2 production from human activities is about 35Gt per year (https://ourworldindata.org/co2-emissions). Forests absorb around 7.5Gt per year (https://www.wri.org/insights/forests-absorb-twice-much-carbon-they-emit-each-year). Let’s say we double the total amount of forest on the whole planet, and we cut CO2 production by half: that’s 35/2 = 17.5Gt produced vs 2 × 7.5 = 15Gt absorbed, so very roughly 15Gt vs 15Gt. Is the problem solved? Nope.
First, because these forests have to stay in place, or be used as building material, but cannot be burned for heating. So we still have to plant extra forests for heating. Second, we still have all the CO2 we have put into the atmosphere over the last century. So the goal is not to be at equilibrium, but to be net negative.
Worldwide CCS storage capacity has been estimated at between 8,000 and 55,000 gigatonnes (https://en.wikipedia.org/wiki/Carbon_capture_and_storage). And yes, it is already carbon negative, and already in production in several countries, with a current net result of ~50Mt of CO2 per year (https://www.statista.com/statistics/726634/large-scale-carbon-capture-and-storage-projects-worldwide-capacity/).
There is no single “plant trees and go electric” solution to global warming. There are lots of solutions, each with pros and cons, and CCS is just a small part of the equation: use renewable energy, use storage (lithium batteries, hydrogen, …), nuclear, change habits to consume less, plant trees, and develop carbon capture solutions.
The problem won’t be solved by a single solution, but by finding the right balance between all the possibilities. And those who “know” it won’t work are kindly invited to let those who don’t know try.
Geological reservoirs are thousands of metres deep and several dozen km wide. The pressure is a few MPa and the temperature hundreds of °C. Conditions are so extreme that filling them with gas barely changes anything, especially if they were already filled with gas a few dozen years ago. Furthermore, they are not big vacuums like most people imagine. It’s more like giant spongy rock, like sand. It’s not a balloon you inflate or deflate.
CCS facilities are not in competition with forests; they are a complementary solution. If you manage to capture carbon right next to the polluting factories, you don’t spread CO2 into the atmosphere waiting for it to be captured by a forest on the other side of the globe. And they can be powered by solar panels.
What do you mean?
Yet another package manager…
apt for life!