This paper presents a model of how people ask goal-oriented questions
and update their beliefs from the answers. The authors work in the context
of a Battleship game, where the set of questions a person can ask is
pre-defined, as is the set of possible answers (e.g. the color of a given
board cell, or whether the ship of a given color is horizontal or vertical).
The hypothesis space is the set of possible board configurations, i.e. placements of the ships.
The novelty of this paper is to consider that questions are asked with a goal in mind, rather than simply to figure out the full state of the board. A goal defines a projection of the board space. For example, the goal of finding out which ships are touching the left border has the power set of ships as its possible answers, and each such answer is associated with the set of boards for which that answer is true.
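To make the projection concrete, here is a minimal sketch (my own, not from the paper; the board representation and the `ships_touching_left` goal are assumptions) of how a goal partitions the candidate boards into answer classes:

```python
from collections import defaultdict

# Assumed board representation: a dict mapping (row, col) to a color string,
# with "water" marking empty cells.
def ships_touching_left(board):
    """Goal question: which ship colors touch the left border (column 0)?"""
    return frozenset(color for (row, col), color in board.items()
                     if col == 0 and color != "water")

def partition_by_goal(boards, goal):
    """Group candidate boards by the answer the goal question gives on them.

    Each resulting class corresponds to one possible answer (here, one
    element of the power set of ship colors) and contains exactly the
    boards for which that answer is true.
    """
    classes = defaultdict(list)
    for board in boards:
        classes[goal(board)].append(board)
    return classes

# Usage: classes = partition_by_goal(candidate_boards, ships_touching_left)
```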
Under this model, the questioner can be Bayesian and choose the question
that maximizes expected information gain, i.e. the expected reduction in
uncertainty about the goal rather than about the full board. The probability
of a given answer is computed using a uniform prior over all boards
consistent with that answer.
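In equation form (my notation, not the paper's): let $B$ be the set of boards consistent with the evidence so far, taken to be uniformly distributed, let $g(b)$ be the goal's answer on board $b$, and let $q(b)$ be the answer a candidate question $q$ would receive on board $b$. Then

$$
P(q = a) = \frac{|\{b \in B : q(b) = a\}|}{|B|}, \qquad
\mathrm{EIG}(q) = H\big[P(g)\big] - \sum_{a} P(q = a)\, H\big[P(g \mid q = a)\big],
$$

where $H$ is the Shannon entropy over the goal's answer classes, $P(g)$ is induced by the uniform prior on $B$, and $P(g \mid q = a)$ is induced by restricting that prior to the boards with $q(b) = a$. The questioner asks the question with the highest score.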
This model seems to explain people's behavior quite well: the model's
scores correlate strongly with human behavior in the task.
This is an interesting scenario for modeling how people pick questions when they have a goal in mind, and it turns out that the intuitive explanation works in practice. Building systems that interact with people poses further practical challenges that are out of scope for the proposed model: how to formulate questions, how to interpret free-form answers, and how to work with massive or unbounded hypothesis spaces (e.g. natural-language answers).