It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
Once configured, Tor Hidden Services also just work (though you may need some fresh bridges in countries where ISPs block Tor). You don’t have to trust any specific third party in this case.
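For reference, the “once configured” part amounts to a couple of lines in `torrc`; the paths and ports below are illustrative, not a recommendation:

```
# /etc/tor/torrc — minimal hidden service (example paths/ports)
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080

# If the ISP blocks Tor, bridges can be enabled too:
# UseBridges 1
# Bridge obfs4 <address>:<port> <fingerprint> ...
```

Tor creates the `hostname` file with the .onion address inside `HiddenServiceDir` on the next restart.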
Like Firefox ScreenshotGo? (I think it only supports English though)
Audalin@lemmy.world to Selfhosted@lemmy.world • “Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)” (English, 5 points, 1 year ago)

If your CPU isn’t ancient, it’s mostly about memory speed: VRAM is very fast, DDR5 RAM is reasonably fast, and swap is slow even on a modern SSD.
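A rough back-of-the-envelope for why memory speed dominates: generating each token streams the whole set of weights through memory once, so bandwidth divided by model size gives an upper bound on tokens per second. The figures below are illustrative assumptions, not measurements:

```shell
# Upper bound: tokens/s ≈ memory bandwidth / model size in memory.
bandwidth_gbs=60   # assumed dual-channel DDR5, roughly 60 GB/s
model_gb=6         # assumed 8B model quantised to about 6 GB
echo "~$((bandwidth_gbs / model_gb)) tok/s upper bound"
```

The same arithmetic with a GPU’s several-hundred-GB/s VRAM explains the order-of-magnitude gap, and why swap (well under 10 GB/s) is hopeless.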
8x7B is Mixtral, yeah.
Audalin@lemmy.world to Selfhosted@lemmy.world • “Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)” (English, 7 points, 1 year ago)

Mostly via terminal, yeah. It’s convenient when you’re used to it, and I am.
Let’s see, my inference speed now is:
- ~60-65 tok/s for an 8B model in Q5_K/Q6_K (entirely in VRAM);
- ~36 tok/s for a 14B model in Q6_K (entirely in VRAM);
- ~4.5 tok/s for a 35B model in Q5_K_M (16/41 layers in VRAM);
- ~12.5 tok/s for an 8x7B model in Q4_K_M (18/33 layers in VRAM);
- ~4.5 tok/s for a 70B model in Q2_K (44/81 layers in VRAM);
- ~2.5 tok/s for a 70B model in Q3_K_L (28/81 layers in VRAM).
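The “N/M layers in VRAM” splits above correspond to llama.cpp’s GPU-offload flag. A typical invocation might look like this (model path and layer count are made up for illustration, and the binary is `./main` in older builds):

```shell
# Offload 18 of the model's layers to the GPU, keep the rest in RAM:
./llama-cli -m models/mixtral-8x7b.Q4_K_M.gguf -ngl 18 -p "Hello"
```

Raising `-ngl` until VRAM is nearly full is usually the single biggest speed lever for models that don’t fit entirely on the GPU.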
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
Audalin@lemmy.world to Selfhosted@lemmy.world • “Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)” (English, 10 points, 1 year ago)

Have been using llama.cpp, whisper.cpp and Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and an SSH server.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
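On the “tricky to set up” point: a common ROCm hurdle with consumer AMD cards is that officially unsupported GPUs need a gfx-version override before libraries will use them. The value below is only an example (it matches RDNA2-class cards); it must be chosen for the specific GPU:

```
# Example only — pick the value matching your GPU's gfx target:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```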
My use isn’t intensive enough to warrant measuring energy costs.
Audalin@lemmy.world to Sync for Lemmy@lemmy.world • “I got a new phone, with all the same accounts, but cannot restore my previous lifetime purchases from sync. I'm not sure how to rectify it.” (English, 2 points, 1 year ago)

No idea whether this would work, but have you tried moving app data via `adb`?
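If that suggestion means `adb backup`/`adb restore`, the flow is roughly the following; the package name is a placeholder, and note that apps can opt out of ADB backups entirely:

```shell
# On the old phone (package name is hypothetical):
adb backup -f sync.ab com.example.syncforlemmy
# On the new phone:
adb restore sync.ab
```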
Disabling root login and password auth, using a non-standard port and updating regularly works for me for this exact use case.
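The first two of those measures map onto a couple of `sshd_config` directives (the port number is just an example):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
Port 2222   # any non-standard port
```

After editing, reload sshd and confirm key-based login works in a second session before closing the current one.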
I’ve never encountered a keyboard app with UI/UX comparable to Fleksy, so that’s what I use (and UI/UX is everything for a keyboard).
The settings UI has become a bit silly over the course of updates, though; the praise above is specifically for the keyboard itself.
Does it? I set its `$PREFIX/etc/resolv.conf` to Cloudflare and `dig` uses it fine.
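For reference, pointing the resolver at Cloudflare amounts to something like:

```
# $PREFIX/etc/resolv.conf (Termux prefix)
nameserver 1.1.1.1
nameserver 1.0.0.1
```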
Audalin@lemmy.world to Android@lemmy.world • “What do you use to edit PDF on your phone?” (English, 1 point, 1 year ago)

I don’t usually edit PDFs on my phone. On the PC, I use pdftk+qpdf+img2pdf+ocrmypdf (all command-line apps). Some of those can be found in the default Termux repos once you install the terminal emulator; some could perhaps be compiled and used as well.
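A few representative invocations of those tools (all file names are placeholders):

```shell
pdftk a.pdf b.pdf cat output merged.pdf    # concatenate PDFs
qpdf --decrypt locked.pdf unlocked.pdf     # strip encryption
img2pdf scan1.jpg scan2.jpg -o scans.pdf   # lossless images-to-PDF
ocrmypdf scans.pdf searchable.pdf          # add an OCR text layer
```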
Audalin@lemmy.world to Android@lemmy.world • “How come I get different app versions when I load f-droid from my desktop compared to loading the f-droid client (app) on my phone?” (English, 14 points, 1 year ago)

Some updates might be restricted to certain architectures, Android versions &c. Some could be beta versions. Or your repositories simply need to be synchronised.
If it isn’t the latter, check the following settings: “Include incompatible versions”, “Include anti-feature apps” and “Unstable updates”.
Audalin@lemmy.world to Selfhosted@lemmy.world • “Welcome to !selfhosted@lemmy.world - What do you selfhost?” (English, 1 point, 2 years ago)

Primarily as a personal knowledge database, but also for managing what, how and when things are to be done (not for reminders or external motivation; rather to form a mental picture and understand the priorities). In the future, I’ll also use it to track the state of various ongoing affairs as the need arises, and perhaps integrate local programs and APIs into the wiki pages (that’s probably where I’d need to write custom MW extensions).
Audalin@lemmy.world to Selfhosted@lemmy.world • “Welcome to !selfhosted@lemmy.world - What do you selfhost?” (English, 4 points, 2 years ago)

I have a MediaWiki instance on my laptop (I’ve found the features of all other wikis/mindmaps/knowledge databases decisively insufficient after having a taste of MW templates, Semantic MediaWiki and Scribunto).
Also some smaller things like pihole-standalone, Jellyfin and dictd.
I enjoy xenharmonic music and modern academic music the most, but I’m not familiar with everything there, so any recommendations are welcome if you, reader, have something in mind.