This article is inspired by the ideas of cybernetics, Martin Heidegger, and Nassim Taleb. I’m looking at what I think is the largest danger of large language models (LLMs).
LLMs are extraordinarily proficient in the domain of language, and that proficiency has quietly created a philosophical problem that most technical discussions fail to notice. LLMs speak fluently, respond coherently, adapt style with ease, and generate text that fits seamlessly into human patterns of explanation and reflection.
The danger doesn’t lie in what they get wrong, but in how convincing they are when they appear to get things right. Fluency triggers attribution, and attribution tempts us to confuse linguistic competence with lived understanding. We humans are prone to jumping to attributions; we seek purpose in anything and everything.
…

Comments
Excellent!
Looking at the world through this lens reveals that even people can be sorted across this domain. Some people seem to value the primacy of words over their reality, while others place more value on the reality in front of "their own two eyes". These groups are often in conflict, and on average neither one is more correct than the other, because reality is independent of the domain. Thanks for a thought-provoking read.