I put bugs in software. Master of half-finished projects. Irony is my default setting. Hate factories, but love conveyor belts. Buzzword-Driven Development. Edible. Totally not a robot. Definitely not a mouse. Pan-Demi-Sexual, _not_ Pandemic-Sexual. // Your Plastic Pal Who's Fun to Be With # YesBots // Cats of all sizes also welcome # YesKittens # YesCats # YesTigers
Don't use cloud stuff. Just run your LLMs locally, if you have to.
You only have an old GPU? Just use a smaller model.
"But the quality of smaller/local models is worse?"
Well, since you are using LLMs, I assumed that you didn't care about quality.
(There are some proper use cases for LLMs, especially regarding accessibility. I am totally for using LLMs in that area.)
But in general: You want bad text, bad pictures, bad code. Stop clinging to old concepts of "better or worse". You want quick, not good.