I was about to say, a self-hosted LLM means I’m not competing with every market analysis tool, customer service replacement, and 10 y/o kid bombarding the service with junk. It doesn’t need to be ultra-fast if I’m the only one using the hardware.
and who’ll supply the model and training and updates and data curation, dom? does it come as manna from heaven? do you merely step upon the path and receive the divine wisdom of fresh llm updates?
fucking hell
Base open source model.
Topic expert models.
Community LoRAs.
Program extensions.
Look what ComfyUI + Stable Diffusion can achieve.
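For concreteness, here’s a minimal sketch of that stack in Python: an open-weight base model with a community LoRA layered on top via Hugging Face transformers + peft. The base and adapter repo IDs are placeholders, not specific recommendations.

```python
# Sketch of the stack above: open-weight base model + community LoRA adapter.
# Both repo IDs are placeholders, not specific recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"        # base open-weight model (placeholder)
ADAPTER = "someuser/topic-expert-lora"   # hypothetical community LoRA

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # stack the adapter on top

prompt = "Draft a polite reply to a customer asking about a late order."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```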
Base open source model just means some company commanding a great deal of capital and compute made the weights public to fuck with LLMaaS providers it can’t directly compete with yet; it’s not some guy in a garage training and RLHF-ing them for months on end just to hand the result over to you to fine-tune for writing Ciaphas Cain fanfiction.
And with the pruned Llama models, it runs really quickly on an RTX 2070.
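For anyone curious what that single-user local inference looks like, here’s a hedged sketch with llama-cpp-python, assuming a quantized GGUF build of a small Llama model; the file path is a placeholder and the layer count is tuned to the 2070’s 8 GB of VRAM.

```python
# Hedged sketch of single-user local inference with llama-cpp-python.
# Assumes a quantized GGUF build of a small Llama model; the model path
# is a hypothetical local file, not a real artifact.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=32,  # layers offloaded to the GPU; lower this if VRAM runs out
    n_ctx=2048,       # context window
)

result = llm("Q: Why self-host an LLM?\nA:", max_tokens=64, stop=["Q:"])
print(result["choices"][0]["text"])
```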