While you are reading this story, you may well be interrupted by a phone call, a text message, an email, or even a flesh-and-blood person wanting a chat. Hopefully, your human interrupter will, with a quick glance, assess what you are doing and whether it is OK to bother you. However, mobile phones and computers show no such politeness... yet.
"Today's digital lifestyle has the unfortunate side effect of bombarding people with messages from many devices all the time, regardless of whether they're willing, or able to respond," says Roel Vertegaal, from the Human Media Lab, Queen's University, Canada. "We now need computers that sense when we are busy, when we are available for interruption, and know when to wait their turn - just as we do in human-to-human interactions."
What sets people (the considerate ones) apart from computers is that people can judge if now is a good time to interrupt - if you are working hard on a report, taking an important call or just surfing the web. And if it isn't a good time, they can judge if the message is so important that they should bother you anyway. But computers may be about to learn some social skills. Researchers such as Vertegaal are developing Attentive User Interfaces (AUIs) - interfaces for the array of information devices in our lives that will mimic the considered judgement of humans. They reported on their progress at the ACM CHI 2003 conference on Human Factors in Computing Systems in April in Florida, and in a special issue of the journal Communications of the ACM in March, edited by Vertegaal.
In his editorial, Vertegaal compared AUIs with a well-tuned set of traffic lights. "Modern systems use sensors in the road to determine where users will go next - their future focus," he wrote. "They employ statistical models of traffic volume to determine the priority of user requests for intersection space. They use peripheral displays - traffic lights - to negotiate turn-taking activities of users sharing the limited real estate of an intersection. Likewise, AUIs may measure and model the focus and priorities of their user's attention. They structure their communication such that the limited resource of user attention is allocated optimally across the user's tasks."
When people observe you working, they intuitively gauge your posture, what you are looking at, and how deep in concentration you seem. To help computing devices make the same judgment call, researchers are using similar information from sensors - cameras analysing pose and gaze, microphones picking up sound including speech, GPS giving location - along with more traditionally available information such as the time and day of the week, your online schedule, and how you are interacting with software and devices.
This stream of clues is the input for a mathematical model, which functions as an "attentional Sherlock Holmes" according to Eric Horvitz and his colleagues from Microsoft Research, in an article in the same issue of the journal. "Bayesian attentional models take as inputs sensors that provide streams of evidence about the attention and provide a means for computing probability distributions over a user's attention and intentions."
A Bayesian model starts with some a priori knowledge, or belief, about how a system works, represented by a prior probability distribution over the model's structure and parameters - what the variables are and how they influence each other. As observations are made, this knowledge is updated to give a posterior probability distribution that reflects the evidence.
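The prior-to-posterior update can be sketched in a few lines of code. This is a toy illustration, not any system described in the article: the question ("is the user busy?"), the typing clue and all the probabilities are invented for the example.

```python
# A minimal sketch of a single Bayesian update. We start with a prior
# belief that the user is busy, observe one clue (rapid typing), and
# apply Bayes' rule to get the posterior belief.

def bayes_update(prior, likelihood_if_busy, likelihood_if_free):
    """Return P(busy | clue) from P(busy) and the two likelihoods."""
    numerator = likelihood_if_busy * prior
    evidence = numerator + likelihood_if_free * (1 - prior)
    return numerator / evidence

# Assumed prior: at this time of day, the user is busy 30% of the time.
prior_busy = 0.30

# Assumed likelihoods: rapid continuous typing is seen 80% of the time
# when the user is busy, but only 20% of the time when they are free.
posterior_busy = bayes_update(prior_busy, 0.80, 0.20)
print(f"{posterior_busy:.3f}")  # prints "0.632"
```

Observing the clue lifts the belief that the user is busy from 0.30 to about 0.63; a second observation would update this posterior in turn, using it as the new prior.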
Useful or annoying?
Bayesian models have been widely used, for example to diagnose diseases and make search engines more helpful. Now researchers are hoping that these new computing interfaces will help us cope with the onslaught of information, by combining this mathematical technique with a psychological understanding of how we behave.
You may already have been exposed to an earlier application of Bayesian models - the infamous paper-clip office assistant from Microsoft Office. It monitored user activity - actions taken and undone, buttons clicked, menus browsed and the length of pauses - and fed this information into a Bayesian model that continually assessed whether you were in need of assistance, and what you might need help with. It was an impressive attempt to make a computer more helpful, but as often happens with early applications, the paper-clip was sometimes an annoying pest. However, as models become more sophisticated, and with continuing user testing, Horvitz hopes that AUIs will "change the way it feels to work with machines".
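The idea of combining several such clues into a running belief can be sketched as repeated Bayesian updates. This is not Microsoft's actual model - the clues, the independence assumption (naive-Bayes style) and every probability below are invented for illustration.

```python
# A toy assistant that accumulates evidence that the user needs help.
# Each clue updates the belief via Bayes' rule; the posterior after one
# clue becomes the prior for the next (clues assumed independent).

def update(prior, p_obs_if_help, p_obs_if_no_help):
    """One Bayesian update: return P(needs help | clue observed)."""
    numerator = p_obs_if_help * prior
    return numerator / (numerator + p_obs_if_no_help * (1 - prior))

belief = 0.10  # assumed prior: user needs help 10% of the time

# Each clue: (P(clue | needs help), P(clue | does not need help))
clues = [
    ("undo pressed repeatedly", 0.6, 0.2),
    ("long pause",              0.5, 0.3),
    ("browsing menus",          0.7, 0.4),
]

for name, p_help, p_no_help in clues:
    belief = update(belief, p_help, p_no_help)
    print(f"after '{name}': belief = {belief:.2f}")
```

After all three clues the belief climbs from 0.10 to roughly 0.49; an assistant could then offer help only once the belief crosses some threshold, rather than interrupting at the first sign of trouble.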
So the next time the paper-clip asks you: "Would you like help writing that letter?" remember that one day it might more helpfully offer to hold your calls while you watch the FA Cup final.