Inferring capabilities of intelligent agents from their external traits
Human-like agents are an often-used interaction metaphor for natural language-based systems. Users seem to intuitively adopt a more human-like conversation style towards such agents. Although this may be beneficial in some cases, in other cases the agent may cause users to severely overestimate the system's capabilities. This may inadvertently reduce the usability of the system.
I created a human-like agent-based interface for a mock-up mobile device for retrieving travel and ticket information from the Dutch Railways. The agent is controlled using a "free input" text area, and provides textual responses.
As natural language processing capabilities are not flawless, I implemented the system as a Wizard of Oz prototype. Users were made to believe that they were using an actual system, while in fact all input was sent over the Internet to an experimenter, who analyzed it and provided standardized answers (see figure below).
In the experiment I independently varied the system capabilities (high: the system understands almost everything; low: the system does not understand complex input) and the human-like cues provided by the system (very human-like versus very computer-like).
As expected, the capable systems were very usable. Notably, users provided more human-like input and attempted to exploit more human-like capabilities when the system looked more human.
Usability of the less capable system was much lower. Furthermore, users of the system with a human-like character were very confused when the system could not handle their input: 5 out of 23 participants in this condition stopped the experiment before completing a single task. Users of the computer-like interface learned much faster that the system was not very capable and adjusted their input accordingly.
Agent-based interfaces are a powerful metaphor, but the metaphor is too powerful for most of our current systems. Interaction designers should be very careful when considering an agent as an interface paradigm. Usability researchers should investigate how to prevent overestimation in agent-based systems. They may have to draw upon social-psychological research on expectation management and on linguistic research on the principles of inter-human conversation.
Knijnenburg, B.P., Willemsen, M.C.: Inferring capabilities of intelligent agents from their external traits. Paper presented at the Annual Pre-ICIS Workshop on HCI Research in MIS (SigHCI), Auckland, New Zealand.