This paper is beautiful. It condenses a lot of my thinking about where computer interfaces should be heading in its explanation of why any communication channel where context is informative (e.g. human language) will necessarily have ambiguity in its basic communicative units (e.g. words).

The paper provides two arguments, both very simple. First, the information-theoretic one: if context $C$ is informative about meaning, then in an efficient channel words should not re-encode bits that context already supplies. If a word conveys no more when combined with its context than it does alone, the word is ignoring the information in the context and spending redundant bits. So, in an efficient channel, a word plus its context must convey more information than the word alone, which means the word out of context underdetermines the meaning, i.e. it is ambiguous.

The second argument concerns the cost of different linguistic units (i.e. words). Some words are more costly to communicate than others (longer, harder to pronounce, etc.). Suppose two meanings are conveyed by two different words, the two meanings cannot be confused given their contexts (e.g. the verb "bear" and the animal "bear"), and one word is more expensive than the other. Then the channel can be improved, with no loss in efficacy, by assigning both meanings to the cheaper word. So, as languages naturally evolve to be more efficient, they introduce ambiguity precisely for reasons of efficiency.
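To make both arguments concrete, here is a toy sketch of my own (not from the paper; the meanings, contexts, and costs are invented for illustration). Two meanings each occur in a context that fully predicts them. An "unambiguous" vocabulary gives each meaning its own word; an "ambiguous" one maps both meanings to the cheaper word. The ambiguous vocabulary has a lower expected articulation cost, leaves the hearer with zero uncertainty once context is taken into account, yet carries a full bit of ambiguity out of context.

```python
import math

# Toy joint distribution over (meaning, context) pairs. Each context
# fully predicts its meaning: "zoo" -> animal-bear, "burden" -> verb-bear.
joint = {("animal", "zoo"): 0.5, ("carry", "burden"): 0.5}

# Vocabularies map meaning -> (word, articulation cost). Costs here are
# just word lengths. "bruin" is an invented pricier synonym for the animal.
vocab_unambiguous = {"animal": ("bruin", 5), "carry": ("bear", 4)}
vocab_ambiguous   = {"animal": ("bear", 4), "carry": ("bear", 4)}

def expected_cost(vocab):
    """Average articulation cost per utterance under the joint distribution."""
    return sum(p * vocab[m][1] for (m, _), p in joint.items())

def _entropy_of_groups(groups):
    """Sum of p * log2(group_total / p): residual entropy of meaning
    given whatever the groups condition on."""
    h = 0.0
    for dist in groups.values():
        total = sum(dist.values())
        for p in dist.values():
            h += -p * math.log2(p / total)
    return h

def residual_entropy(vocab):
    """H(meaning | word, context): hearer's uncertainty in context."""
    groups = {}
    for (m, c), p in joint.items():
        groups.setdefault((vocab[m][0], c), {})[m] = p
    return _entropy_of_groups(groups)

def out_of_context_entropy(vocab):
    """H(meaning | word): how ambiguous the word is on its own."""
    groups = {}
    for (m, c), p in joint.items():
        groups.setdefault(vocab[m][0], {})[m] = p
    return _entropy_of_groups(groups)

for name, vocab in [("unambiguous", vocab_unambiguous),
                    ("ambiguous", vocab_ambiguous)]:
    print(name, expected_cost(vocab),
          residual_entropy(vocab), out_of_context_entropy(vocab))
```

The ambiguous vocabulary wins on cost (4.0 vs 4.5 per utterance) with the same zero in-context confusion, exactly because the one bit needed to separate the two meanings has been offloaded onto the context.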
After these arguments, the paper presents empirical evidence for the predictions the arguments make on English, German, and Dutch, and the predictions hold up quite robustly.
It's also very cool that from this paper I got to know where the Zipf distribution comes from: it was originally proposed to model the distribution of word frequencies in human languages.
My thesis has been that ambiguity should be embraced by our computer interfaces. The paper cites Levinson saying that "human cognitive abilities will favor communication systems which are heavy on hearer inference and light on speaker effort", because "inference is cheap, articulation expensive, and thus the design requirements are for a system that maximizes inference". Exactly! But in the way we communicate with computers, speakers (users) must spell out everything in painstaking detail, and listeners do basically no inference. This makes using computers painful and inefficient. Let's change that.