You can run an LLM on a phone (tried it myself once, with llama.cpp), but even on the simplest model I could find it was doing maybe one word every few seconds while pegging the CPU at 100%. The quality was terrible, and your battery wouldn't last an hour.
A study from 1989 doesn't apply to modern plants built 35 years later; it really doesn't make sense to extrapolate it like this.
I would rather do that instead of indirectly killing a bunch of unwilling people, yeah.
what about the best motherfucking website?
Lemmy, but for Twitter instead of Reddit.
Yeah you’re right, I just felt the need to point out that API calls are not really comparable to serving a full website.
The thing is that when you interact with the remote server directly, it's not 10 API calls; it's 10 full-blown HTML webpages that have to be served to you, which are way bigger than REST API responses.
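To illustrate the size difference (with made-up post data, not any real instance's actual markup): a JSON API response carries just the content, while the equivalent HTML page wraps the same data in markup, stylesheets, scripts, navigation, and so on.

```python
import json

# Hypothetical post data, purely for illustration.
post = {"id": 123, "title": "Hello", "body": "Some text about federation."}

# What a REST API endpoint would typically return: just the data.
api_response = json.dumps(post)

# A full page serving the same post also ships document structure,
# asset references, navigation, footer, etc. (sketched here, minimally;
# real pages are usually far heavier than this).
html_page = f"""<!DOCTYPE html>
<html><head><title>{post['title']}</title>
<link rel="stylesheet" href="/app.css"><script src="/app.js"></script></head>
<body><nav>home | communities | login</nav>
<article><h1>{post['title']}</h1><p>{post['body']}</p></article>
<footer>about | legal | source</footer></body></html>"""

print(len(api_response), len(html_page))
```

Even in this toy sketch the HTML is several times larger than the JSON for the same content, and on a real site the gap is usually much bigger once CSS, JavaScript, and images are counted.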
A Very Polish Christmas by Sabadu.