# The communicative function of ambiguity in language (@ Cognition 2012)

### Steven T. Piantadosi, Harry Tily, Edward Gibson

This paper is beautiful. It condenses a lot of my thinking about where computer interfaces should be heading in its explanation of why any communication channel where context is informative (e.g. human language) will necessarily have ambiguity in its basic communicative units (e.g. words). The paper provides two arguments, both of which are very simple.

First, if context $C$ is informative about meaning, then an efficient channel should not use words to convey information that context already provides; a word that fully specified its meaning on its own would be spending redundant bits. So in an efficient language a word conveys fewer bits than are needed to pin down its meaning on its own, which means that, heard out of context, the word is ambiguous.

Second, there is an argument about the cost of different linguistic units (i.e. words). Some words are more costly to communicate than others (longer, harder to pronounce, etc.). Suppose two meanings are expressed by two different words, the two meanings can never be confused given context (e.g. the verb "bear" vs. the animal "bear"), and one word is more expensive than the other. Then the channel can be improved by mapping both meanings to the cheaper word, with no loss in efficacy. So, as languages naturally evolve toward efficiency, they introduce ambiguity precisely for reasons of efficiency.
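Both arguments can be made concrete with a toy calculation. The sketch below is my own, not from the paper: the situations, word forms, probabilities, and the use of word length as a cost proxy are all made-up assumptions. It shows that reusing one word for two context-separable meanings leaves the meaning entropy given word *and* context at zero (no communicative loss), while being ambiguous out of context, and that the reuse lowers expected word cost.

```python
import math
from collections import defaultdict

# Toy world (all numbers hypothetical): two meanings that context always
# disambiguates. Each (context, meaning) situation is equally likely.
situations = [
    ("at the zoo", "bear (animal)"),
    ("lifting boxes", "bear (verb: carry)"),
]

def meaning_entropy(words, use_context):
    """H(meaning | word) or H(meaning | word, context), in bits."""
    p = 1.0 / len(situations)
    joint = defaultdict(float)        # P(signal, meaning)
    signal_marg = defaultdict(float)  # P(signal)
    for ctx, meaning in situations:
        signal = (words[meaning], ctx) if use_context else words[meaning]
        joint[(signal, meaning)] += p
        signal_marg[signal] += p
    return -sum(pr * math.log2(pr / signal_marg[sig])
                for (sig, _), pr in joint.items())

# Language A: a distinct (made-up) word per meaning. No ambiguity anywhere.
distinct = {"bear (animal)": "bruin", "bear (verb: carry)": "bear"}
# Language B: both meanings reuse the cheaper word "bear".
reused = {"bear (animal)": "bear", "bear (verb: carry)": "bear"}

h_reused_no_ctx = meaning_entropy(reused, use_context=False)      # 1 bit: ambiguous alone
h_reused_ctx = meaning_entropy(reused, use_context=True)          # 0 bits: context resolves it
h_distinct_no_ctx = meaning_entropy(distinct, use_context=False)  # 0 bits: never ambiguous

# Argument 2: word cost, with length as a crude proxy for production effort.
def expected_cost(words):
    return sum(len(words[m]) for _, m in situations) / len(situations)

cost_distinct = expected_cost(distinct)  # 4.5 letters on average
cost_reused = expected_cost(reused)      # 4.0 letters on average
```

Language B is strictly better under these assumptions: same zero in-context ambiguity, lower expected cost. The out-of-context ambiguity (1 bit) is exactly the information the listener gets from context for free.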