What is an acceptable token speed (t/s) for your daily local LLM use?

~5 t/s (Very slow - unusable for longer paragraph generation)
0% (0 votes)
~10 t/s (Still slow - typical for CPU-only inference)
0% (0 votes)
~20 t/s (Mostly usable for light chat or short completions)
67% (2 votes)
~40 t/s (Much faster than the average reading speed)
33% (1 vote)
~60 t/s (Smooth for most common tasks)
0% (0 votes)
~100 t/s (Very fast, comfortable for long code snippet generation)
0% (0 votes)
>100 t/s (Feels near-instant when it comes to short paragraphs)
0% (0 votes)
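For context on the "reading speed" comparisons in the options above, here is a rough back-of-the-envelope conversion from tokens/second to words/minute. The constants are heuristics, not figures from the poll: roughly 0.75 English words per token, and an average silent reading speed of about 250 words per minute.

```python
# Rough conversion from generation speed (tokens/s) to reading speed (words/min).
# Assumed heuristics (not from the poll): ~0.75 English words per token,
# ~250 wpm average silent reading speed.
WORDS_PER_TOKEN = 0.75
AVG_READING_WPM = 250

def tokens_per_sec_to_wpm(tps: float) -> float:
    """Convert a tokens/second rate into approximate words/minute."""
    return tps * WORDS_PER_TOKEN * 60

for tps in (5, 10, 20, 40, 60, 100):
    wpm = tokens_per_sec_to_wpm(tps)
    ratio = wpm / AVG_READING_WPM
    print(f"{tps:>3} t/s ~ {wpm:>5.0f} wpm ({ratio:.1f}x average reading speed)")
```

Under these assumptions, ~5 t/s already lands near typical reading speed (about 225 wpm), while ~40 t/s comes out around 7x faster than the average reader, which is consistent with how the options are worded.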