Discussion about this post

Matthias Broecheler

I like the "AI writes in brown" analogy which comes from the stochastic generation process at the core of LLMs - they don't actually "write".

I remember how amazed I was when I got to see my father's first needle (dot-matrix!) printer in his office. Man, did I waste a lot of ink and paper drawing pictures. But, of course, it doesn't draw; it simulates drawing through individual dots. That process has gotten a lot better over time, but it still means much lower fidelity than actually drawing a picture - you see the difference when you zoom in 100x.

That loss of fidelity is particularly acute with genAI "writing". If what you are trying to convey is truly important, why would you accept that loss of fidelity?

Eric M. White

As a reader -- whether I'm consuming a LinkedIn post or a GitHub issue -- I pass over anything that's simply AI generated. Why? It puts all the work on me as the reader to make sense of what's being shared and to test it for accuracy and completeness. In very few cases is that an acceptable burden to put on the reader.
