#localllm

2 posts · Last used 2d

@jwp@cloudisland.nz · 2d ago
Have pushed the 0.9.5-dev branch of foxing to Codeberg ( https://codeberg.org/aenertia/foxing/src/branch/0.9.5-dev ) in preparation for release tagging. A LOT of features and a couple of bug fixes now that the packet/file processing engine has stabilized, including semantic routing to parsers for metadata extraction and in-path binary analysis using local ORT/BERT models, letting you get semantic search powers for free when you copy something with foxingd/fxcp. #linux #filesystem #bert #vectordb #postgres #xfs #stratis #blake3 #localllm
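For a sense of what that kind of pipeline looks like, here is a minimal sketch: embed text with a local BERT model through ONNX Runtime, then do a nearest-neighbour semantic search against Postgres with pgvector. The model path, table, column names, and DSN are all hypothetical; foxing's actual schema and models are not shown in the post.

```python
import numpy as np
import onnxruntime as ort
import psycopg2
from transformers import AutoTokenizer

MODEL_DIR = "./bert-embed"  # hypothetical: a BERT encoder exported to ONNX
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
session = ort.InferenceSession(f"{MODEL_DIR}/model.onnx")

def embed(text: str) -> np.ndarray:
    """Mean-pool the encoder's last hidden state into one vector."""
    enc = tokenizer(text, return_tensors="np", truncation=True)
    hidden = session.run(None, dict(enc))[0]  # (1, seq_len, dim)
    mask = enc["attention_mask"][..., None]   # (1, seq_len, 1)
    return (hidden * mask).sum(axis=1)[0] / mask.sum(axis=1)[0]

# Nearest-neighbour lookup; table and column names are made up for the sketch.
vec = embed("quarterly report, PDF, 2024")
literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector input format
with psycopg2.connect("dbname=foxing") as conn, conn.cursor() as cur:
    cur.execute(
        "SELECT path FROM files ORDER BY embedding <-> %s::vector LIMIT 5",
        (literal,),
    )
    for (path,) in cur.fetchall():
        print(path)
```

The "for free" part would come from doing the embedding in-path: a copy tool like fxcp already reads every byte, so computing vectors in the same pass adds no extra I/O.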
@nelov@social.linux.pizza · 6d ago
I've been playing with #LMStudio for #localLLM with mediocre results. However, #gemma4 really changed that: it's faster and more capable than the other models I could try on my hardware. It has recent data and is able to use a fetch tool (among others) to get info on stuff it doesn't know! So I installed #ollama and now it runs even faster, to the point where the delay (waiting) is barely noticeable. Since I am a lightweight user, I can see myself using it as my main source.
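For anyone curious what that fetch-tool flow looks like, here is a sketch against ollama's /api/chat endpoint with a tool definition. The model tag ("gemma3" here) and the fetch_url helper are assumptions for illustration, not the poster's exact setup; substitute whatever model you actually pulled.

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434/api/chat"
MODEL = "gemma3"  # hypothetical tag; `ollama list` shows what you have

def post(payload: dict) -> dict:
    """POST a JSON payload to ollama and return the parsed response."""
    req = urllib.request.Request(
        OLLAMA, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def fetch_url(url: str) -> str:
    """Naive fetch tool: first 2000 bytes of a page as text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(2000).decode("utf-8", errors="replace")

tools = [{"type": "function", "function": {
    "name": "fetch_url",
    "description": "Fetch a web page so the model can read fresh data",
    "parameters": {"type": "object",
                   "properties": {"url": {"type": "string"}},
                   "required": ["url"]}}}]

messages = [{"role": "user", "content": "Summarise https://example.com"}]
reply = post({"model": MODEL, "messages": messages,
              "tools": tools, "stream": False})["message"]

# If the model asked for the tool, run it and hand the result back.
for call in reply.get("tool_calls", []):
    args = call["function"]["arguments"]
    messages.append(reply)
    messages.append({"role": "tool", "content": fetch_url(**args)})
    reply = post({"model": MODEL, "messages": messages,
                  "stream": False})["message"]

print(reply["content"])
```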
