I didn’t read superintelligence into it; I read overdoing, and I found that it struck home. I don’t know the math either, but if I did, I would have done the same calculation.
Ugh… I got to try those buttons the other day. They might work at an intersection, but they’re horrible in a roundabout!!! No wonder nobody uses them!
Tobberone@lemm.ee to ADHD memes@lemmy.dbzer0.com • Growing up with ADHD starter pack (8 months ago): What is this “growing up” you talk about?
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): I’m just at the beginning, but my plan is to use it to evaluate policy docs. There is so much context to keep up with, so any way to load more context into the analysis will be helpful. Learning how to add Excel information to the analysis will also be a big step forward.
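For anyone trying the same thing, here is a minimal sketch of one way to pull spreadsheet data into a prompt, assuming pandas and a local Ollama server on its default port; the file name, column layout, and model tag are made up for illustration:

```python
import pandas as pd  # reading .xlsx files also needs openpyxl installed
import requests

# Hypothetical spreadsheet; pandas flattens it into plain text the model can read.
df = pd.read_excel("policy_metrics.xlsx")  # made-up file name
table_text = df.to_string(index=False)

prompt = (
    "Here is a table extracted from a policy document:\n\n"
    f"{table_text}\n\n"
    "Summarise the main trends in plain language."
)

# One non-streaming call to Ollama's generate endpoint on its default port.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:14b", "prompt": prompt, "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```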
I will have to check out Mistral :) So far Qwen2.5 14B has been the best at providing analysis of my test scenario, but I guess an even higher-parameter model will have its advantages.
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): Thank you! Very useful. I am, again, surprised by how much a better way of asking questions affects the answers, almost as much as using a better model.
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better prepared to handle a larger context window than a smaller llama3.1 model?
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): Thanks! I actually picked up the concept of a context window, and from there how to create a Modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
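(For later readers: the Modelfile change being discussed boils down to a couple of lines. num_ctx is Ollama’s context-window parameter; the base model tag and window size here are just examples.)

```
# Modelfile: extend a base model with a larger context window
FROM llama3.2
PARAMETER num_ctx 8192
```

Then something like `ollama create llama3.2-8k -f Modelfile` registers it under a new name. And the flash attention mentioned above is, as far as I can tell, a server-side toggle: set the OLLAMA_FLASH_ATTENTION=1 environment variable before starting the Ollama server.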
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): Thank you for your detailed answer :) It’s been 20 years and 2 kids since I last tried my hand at reading code, but I’m doing my best to catch up 😊 The context window is a concept I picked up from your links, and it has helped me a great deal!
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): The problem I keep running into with that approach is that only the last page is actually summarised, and some of the texts are… longer.
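A rough sketch of the usual workaround, chunked (map-reduce) summarisation, where every page gets summarised and the partial summaries are then combined; the chunk size and model tag are arbitrary assumptions:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "qwen2.5:14b") -> str:
    """One non-streaming call to a local Ollama server."""
    r = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

def summarise(text: str, chunk_chars: int = 6000) -> str:
    # Split the document so each piece fits in the window, summarise each piece...
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(f"Summarise this excerpt:\n\n{c}") for c in chunks]
    # ...then summarise the summaries, so the earlier pages aren't silently dropped.
    return ask(
        "Combine these partial summaries into one coherent summary:\n\n"
        + "\n\n".join(partials)
    )
```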
Tobberone@lemm.ee to Selfhosted@lemmy.world • Self-GPT: Open WebUI + Ollama = Self Hosted ChatGPT (9 months ago): Do you know of any nifty resources on how to create RAGs using Ollama/WebUI? (Or even fine-tuning?) I’ve tried to set it up, but the documents provided don’t seem to be analysed properly.
I’m trying to get the LLM to read/summarise a certain type of (wordy) file, and it seems the query prompt is limited to about 6k characters.
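For the archives, a bare-bones retrieval step that keeps the prompt under that limit, assuming the nomic-embed-text embedding model has been pulled in Ollama; a real RAG setup would use a proper vector store instead of this in-memory scan:

```python
import requests

EMBED_URL = "http://localhost:11434/api/embeddings"

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; run `ollama pull nomic-embed-text` first.
    r = requests.post(
        EMBED_URL,
        json={"model": "nomic-embed-text", "prompt": text},
        timeout=60,
    )
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Keep only the k chunks most similar to the question, so a handful of
    # relevant passages (not the whole file) has to fit in the ~6k prompt.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```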
Tobberone@lemm.ee to politics@lemmy.world • Mark Robinson may be on his way to a historic defeat in North Carolina (9 months ago): I never said anyone did; I was just reminded of a joke describing how some actions overshadow other activities when retold among peers.
Tobberone@lemm.ee to politics@lemmy.world • Mark Robinson may be on his way to a historic defeat in North Carolina (9 months ago): Which reminds me of that old joke: “but you fuck one coach…”
Edit: just spotted my typo… I’ll leave it in😊
Tobberone@lemm.ee to politics@lemmy.world • Trump claims audience ‘went crazy’ at debate with Harris – but there was no audience (10 months ago): Exactly. His followers expect there to have been an audience, and those fictional people should have been crazy about it…
And to make it worse, they are none too concerned with facts to begin with.
Oh, it’s funny because it’s true.
Now I need to go cry in a corner, because it’s true.
That’s not a straight line, although it is possible to follow without changing direction😊
Tobberone@lemm.ee to Science@mander.xyz • Unlocking Lupus: Potential Cause and Cure Identified (1 year ago): Well, there is a big discussion about nutrition among lupies, so maybe heuristics was on to something this time. On the other hand, in what group isn’t nutrition a topic?
I did not in any way mean to suggest that sensitivity is not a factor, only that light sensitivity may be more of a spectrum and that some people live in a darker world than others. So it may not be a person at the top of the bell curve who needs more light, but someone at the other end of the spectrum entirely.
Since the top comment in this thread was about needing more light in an already bright room, I meant to say that there might be reasons why the people around us prefer 1 or 100,000 lumens…
Apparently, all eyes are not created equal in their ability to transfer light to the retina, and some have narrower or wider fields of vision as well. So, where your eyes may be well adapted to low light levels, others’ may not be. In a world with no artificial shadows and the sun high in the sky for most of the year, being able to filter out sunlight might have been an advantage, while now it means needing lots of artificial light to see straight.
The internet as we know it is dead; we just need a few more years to realise it. And I’m afraid telecommunications will go the same way, when no one can trust that anyone is who they say they are anymore.