Interesting. A few questions, if I may.
Are you running Ollama on the same system as the one consuming it? If yes, does it always run in the background? Does it impact the performance of other applications while it runs in the background?
No, Ollama is running on an old PC with a GeForce 1060 and 16 GB of RAM…
Yes, it’s a “web server” running in the background exposing an API.
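If you’re curious what that looks like, here’s a minimal sketch of hitting that API with Python’s requests library. It assumes the default port (11434) and that you’ve already pulled the model you’re asking for:

```python
import requests

# Ollama exposes a REST API on localhost:11434 by default.
# /api/generate takes a model name and a prompt; stream=False returns
# the whole answer as one JSON object instead of line-by-line chunks.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",   # assumes this model is already pulled
        "prompt": "Why is the sky blue?",
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```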
That said, if I run `top` on the system while not chatting, it sits at 0% usage; it’s only when I ask something that CPU usage peaks at around 55-70%.
You have to understand there are two things here: the server and the model. The server is always running, but it requires next to nothing in terms of resources.
The model is what computes your answers; this is the heavy part. It’s loaded on use, then unloaded again after a delay (five minutes by default).
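If that delay doesn’t suit you, the API accepts a keep_alive field per request: a duration string like “10m”, 0 to unload immediately, or -1 to keep the model loaded indefinitely. A quick sketch (the value here is just an example):

```python
import requests

# keep_alive controls how long the model stays in memory after this
# request; the default is about five minutes.
requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": "Hello!",
        "stream": False,
        "keep_alive": "10m",  # example: keep the model warm for 10 minutes
    },
)
```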
TL;DR
To answer your real question: yes, you could run Ollama on the same system you’re using.
That said, you can just as easily install Ollama on an old computer and consume it from your main machine over the network.
With a client like Oatmeal, you can save/reload/delete your sessions as you wish, so your model remembers what you want.
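“Remembers” just means the client resends the saved conversation with every request; the model itself is stateless between calls. At the API level that’s the /api/chat endpoint with a growing messages list. A rough sketch:

```python
import requests

# "Memory" is the client (Oatmeal, or this script) replaying the saved
# conversation each time; the server holds no state between requests.
history = [
    {"role": "user", "content": "My name is Sam."},
    {"role": "assistant", "content": "Nice to meet you, Sam!"},
    {"role": "user", "content": "What's my name?"},
]
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "llama3.1:8b", "messages": history, "stream": False},
)
print(resp.json()["message"]["content"])
```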
I am running llama3.1:8b, and it’s good enough for day-to-day operations.
My old computer is apparently “not good enough” for Windows 11, but it’s surely good enough for my personal AI running on Linux!
I tried llama3.1:8b and it’s absolutely horrible.