Ollama utterly crushing it. 🤘🤘🤘
It’s awesome to see Ollama doing so well, Jono! The growth and energy around this project are truly impressive. Keep up the great work!
I have recently used Ollama quite a bit, and it interacts very well with LangChain and LangSmith using Llama 3.2 installed on my local PC. I can very clearly see its value when hosting an LLM locally is the preferred choice due to data privacy, regulatory, or latency concerns…
I use ollama literally every day.
I like Ollama as well, specifically for running a model locally on something like an edge device. I'd also love to see the use cases and problems for which Ollama has been chosen most often.
This project is truly thriving and healthy, Jono Bacon https://www.kifinity.com/github/ollama/ollama
Ollama is exactly what llama.cpp needed, so ez
Great for running small LLMs locally on edge devices, and a boon for decentralized AI with personal, private agentic workflows.
I love Ollama - it's so useful
And then there is the PR for Vulkan support that hasn't received any response from the Ollama team for months. Vulkan support would be a game changer, and I really hope they prioritize it.